Test Report: QEMU_macOS 19348

ed915dc6df1b6eb65e62a5b1fde6a752900efcab:2024-07-29:35561

Failed tests (94/278)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 27.69
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 9.95
55 TestCertOptions 11.96
56 TestCertExpiration 197.39
57 TestDockerFlags 12.56
58 TestForceSystemdFlag 12.24
59 TestForceSystemdEnv 10.13
104 TestFunctional/parallel/ServiceCmdConnect 38.1
176 TestMultiControlPlane/serial/StopSecondaryNode 312.3
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 225.14
178 TestMultiControlPlane/serial/RestartSecondaryNode 305.31
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 332.57
181 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
183 TestMultiControlPlane/serial/StopCluster 207.32
186 TestImageBuild/serial/Setup 10.22
189 TestJSONOutput/start/Command 9.72
195 TestJSONOutput/pause/Command 0.08
201 TestJSONOutput/unpause/Command 0.04
218 TestMinikubeProfile 10.05
221 TestMountStart/serial/StartWithMountFirst 9.99
224 TestMultiNode/serial/FreshStart2Nodes 9.89
225 TestMultiNode/serial/DeployApp2Nodes 91.95
226 TestMultiNode/serial/PingHostFrom2Pods 0.08
227 TestMultiNode/serial/AddNode 0.07
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.07
230 TestMultiNode/serial/CopyFile 0.06
231 TestMultiNode/serial/StopNode 0.13
232 TestMultiNode/serial/StartAfterStop 57.83
233 TestMultiNode/serial/RestartKeepsNodes 9.37
234 TestMultiNode/serial/DeleteNode 0.1
235 TestMultiNode/serial/StopMultiNode 3.63
236 TestMultiNode/serial/RestartMultiNode 5.26
237 TestMultiNode/serial/ValidateNameConflict 20.54
241 TestPreload 9.93
243 TestScheduledStopUnix 10.06
244 TestSkaffold 13.11
247 TestRunningBinaryUpgrade 604.61
249 TestKubernetesUpgrade 17.31
262 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.5
263 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 2.16
265 TestStoppedBinaryUpgrade/Upgrade 581.02
267 TestPause/serial/Start 10.08
277 TestNoKubernetes/serial/StartWithK8s 9.98
278 TestNoKubernetes/serial/StartWithStopK8s 5.32
279 TestNoKubernetes/serial/Start 5.3
283 TestNoKubernetes/serial/StartNoArgs 5.32
285 TestNetworkPlugins/group/auto/Start 9.94
286 TestNetworkPlugins/group/calico/Start 9.84
287 TestNetworkPlugins/group/custom-flannel/Start 9.78
288 TestNetworkPlugins/group/false/Start 9.86
289 TestNetworkPlugins/group/kindnet/Start 9.91
290 TestNetworkPlugins/group/flannel/Start 9.85
291 TestNetworkPlugins/group/enable-default-cni/Start 9.91
292 TestNetworkPlugins/group/bridge/Start 10.01
293 TestNetworkPlugins/group/kubenet/Start 9.89
296 TestStartStop/group/old-k8s-version/serial/FirstStart 9.98
297 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
301 TestStartStop/group/old-k8s-version/serial/SecondStart 5.21
302 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
303 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
304 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
305 TestStartStop/group/old-k8s-version/serial/Pause 0.1
307 TestStartStop/group/no-preload/serial/FirstStart 9.92
308 TestStartStop/group/no-preload/serial/DeployApp 0.09
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
312 TestStartStop/group/embed-certs/serial/FirstStart 10.13
314 TestStartStop/group/no-preload/serial/SecondStart 7.52
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
316 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
317 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
318 TestStartStop/group/no-preload/serial/Pause 0.11
320 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.52
321 TestStartStop/group/embed-certs/serial/DeployApp 0.1
322 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.13
325 TestStartStop/group/embed-certs/serial/SecondStart 6.25
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
327 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
328 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
330 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
331 TestStartStop/group/embed-certs/serial/Pause 0.11
334 TestStartStop/group/newest-cni/serial/FirstStart 9.93
336 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.55
339 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
340 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
342 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.06
343 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
345 TestStartStop/group/newest-cni/serial/SecondStart 5.25
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
349 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (27.69s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-541000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-541000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (27.690291791s)

-- stdout --
	{"specversion":"1.0","id":"7ff85702-b905-42dc-8f81-b1568b26384f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-541000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"dd0b801b-b32f-44aa-aee0-089023d4e545","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19348"}}
	{"specversion":"1.0","id":"f68e08c4-e8f0-441c-a666-76a2153c64cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig"}}
	{"specversion":"1.0","id":"0ae9bf9d-2f81-4c2a-8cf1-b6f8bdf37b94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"78fdc6fa-40bc-4159-9d4f-83650b8b5870","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7c8c738e-e954-4bfe-8b82-7dbac86d4847","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube"}}
	{"specversion":"1.0","id":"e4d41ff8-8a46-4f74-aabe-1ff03370561d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"0a6e2d86-6a15-4fe6-b708-44ce304e8c2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b704006f-98e9-417a-a8f8-abd1ae12ed89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"40ae9182-5b1a-4ff6-9b67-232fe4766760","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4030ef05-fdd3-4dc4-9090-9e0ff675e894","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-541000\" primary control-plane node in \"download-only-541000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"71cfa5b0-cd8a-48e4-96af-fce840bd32af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ce32c5e0-4923-46b4-956c-488dc28c0833","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1084e1a60 0x1084e1a60 0x1084e1a60 0x1084e1a60 0x1084e1a60 0x1084e1a60 0x1084e1a60] Decompressors:map[bz2:0x14000711cf0 gz:0x14000711cf8 tar:0x14000711ca0 tar.bz2:0x14000711cb0 tar.gz:0x14000711cc0 tar.xz:0x14000711cd0 tar.zst:0x14000711ce0 tbz2:0x14000711cb0 tgz:0x14
000711cc0 txz:0x14000711cd0 tzst:0x14000711ce0 xz:0x14000711d00 zip:0x14000711d10 zst:0x14000711d08] Getters:map[file:0x140000c2d10 http:0x140000b4550 https:0x140000b45a0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"55dfbd0b-0529-4179-900a-b354c081ee7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0729 15:46:28.636182    1716 out.go:291] Setting OutFile to fd 1 ...
	I0729 15:46:28.636409    1716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 15:46:28.636412    1716 out.go:304] Setting ErrFile to fd 2...
	I0729 15:46:28.636415    1716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 15:46:28.636539    1716 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	W0729 15:46:28.636627    1716 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19348-1218/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19348-1218/.minikube/config/config.json: no such file or directory
	I0729 15:46:28.637918    1716 out.go:298] Setting JSON to true
	I0729 15:46:28.655055    1716 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":955,"bootTime":1722292233,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 15:46:28.655119    1716 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 15:46:28.660827    1716 out.go:97] [download-only-541000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 15:46:28.661017    1716 notify.go:220] Checking for updates...
	W0729 15:46:28.661085    1716 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 15:46:28.663779    1716 out.go:169] MINIKUBE_LOCATION=19348
	I0729 15:46:28.666905    1716 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 15:46:28.671786    1716 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 15:46:28.675818    1716 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 15:46:28.678854    1716 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	W0729 15:46:28.684882    1716 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 15:46:28.685129    1716 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 15:46:28.689812    1716 out.go:97] Using the qemu2 driver based on user configuration
	I0729 15:46:28.689832    1716 start.go:297] selected driver: qemu2
	I0729 15:46:28.689856    1716 start.go:901] validating driver "qemu2" against <nil>
	I0729 15:46:28.689927    1716 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 15:46:28.692814    1716 out.go:169] Automatically selected the socket_vmnet network
	I0729 15:46:28.698572    1716 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 15:46:28.698660    1716 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 15:46:28.698684    1716 cni.go:84] Creating CNI manager for ""
	I0729 15:46:28.698702    1716 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 15:46:28.698747    1716 start.go:340] cluster config:
	{Name:download-only-541000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-541000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 15:46:28.704075    1716 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 15:46:28.708864    1716 out.go:97] Downloading VM boot image ...
	I0729 15:46:28.708884    1716 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso
	I0729 15:46:42.637788    1716 out.go:97] Starting "download-only-541000" primary control-plane node in "download-only-541000" cluster
	I0729 15:46:42.637814    1716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 15:46:42.694845    1716 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 15:46:42.694851    1716 cache.go:56] Caching tarball of preloaded images
	I0729 15:46:42.695016    1716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 15:46:42.703091    1716 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 15:46:42.703098    1716 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 15:46:42.784176    1716 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 15:46:55.069837    1716 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 15:46:55.070013    1716 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 15:46:55.765105    1716 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 15:46:55.765327    1716 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/download-only-541000/config.json ...
	I0729 15:46:55.765344    1716 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/download-only-541000/config.json: {Name:mk2ee03f076dca51dba3a4685e9347d82f2f98bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 15:46:55.765575    1716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 15:46:55.765775    1716 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0729 15:46:56.255250    1716 out.go:169] 
	W0729 15:46:56.260390    1716 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1084e1a60 0x1084e1a60 0x1084e1a60 0x1084e1a60 0x1084e1a60 0x1084e1a60 0x1084e1a60] Decompressors:map[bz2:0x14000711cf0 gz:0x14000711cf8 tar:0x14000711ca0 tar.bz2:0x14000711cb0 tar.gz:0x14000711cc0 tar.xz:0x14000711cd0 tar.zst:0x14000711ce0 tbz2:0x14000711cb0 tgz:0x14000711cc0 txz:0x14000711cd0 tzst:0x14000711ce0 xz:0x14000711d00 zip:0x14000711d10 zst:0x14000711d08] Getters:map[file:0x140000c2d10 http:0x140000b4550 https:0x140000b45a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0729 15:46:56.260416    1716 out_reason.go:110] 
	W0729 15:46:56.268314    1716 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 15:46:56.271234    1716 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-541000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (27.69s)
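
Note: the failure above is a download error, not a VM error. The checksum URL https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 returns 404, most likely because Kubernetes v1.20.0 predates official darwin/arm64 kubectl builds. A minimal Go sketch of the same check (the URL is taken from the log above; the probe itself is illustrative, not minikube code):

	package main

	import (
		"fmt"
		"net/http"
	)

	// HEAD the checksum file minikube tried to fetch; per the log above,
	// this is expected to report "404 Not Found".
	func main() {
		url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request error:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println(url, "->", resp.Status)
	}

This also explains the TestDownloadOnly/v1.20.0/kubectl failure below: the binary was never cached, so the subsequent stat check cannot find it.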

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (9.95s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-871000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-871000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.805889916s)

-- stdout --
	* [offline-docker-871000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-871000" primary control-plane node in "offline-docker-871000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-871000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:31:50.240722    4621 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:31:50.240876    4621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:31:50.240879    4621 out.go:304] Setting ErrFile to fd 2...
	I0729 16:31:50.240882    4621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:31:50.241021    4621 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:31:50.242376    4621 out.go:298] Setting JSON to false
	I0729 16:31:50.259968    4621 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3677,"bootTime":1722292233,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:31:50.260054    4621 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:31:50.264936    4621 out.go:177] * [offline-docker-871000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:31:50.272862    4621 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:31:50.272892    4621 notify.go:220] Checking for updates...
	I0729 16:31:50.280818    4621 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:31:50.283893    4621 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:31:50.286868    4621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:31:50.289832    4621 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:31:50.292840    4621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:31:50.296215    4621 config.go:182] Loaded profile config "multinode-971000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:31:50.296279    4621 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:31:50.298810    4621 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:31:50.305943    4621 start.go:297] selected driver: qemu2
	I0729 16:31:50.305956    4621 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:31:50.305963    4621 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:31:50.308023    4621 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:31:50.310846    4621 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:31:50.313937    4621 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:31:50.313968    4621 cni.go:84] Creating CNI manager for ""
	I0729 16:31:50.313975    4621 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:31:50.313979    4621 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:31:50.314013    4621 start.go:340] cluster config:
	{Name:offline-docker-871000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:31:50.317716    4621 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:31:50.324813    4621 out.go:177] * Starting "offline-docker-871000" primary control-plane node in "offline-docker-871000" cluster
	I0729 16:31:50.328837    4621 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:31:50.328866    4621 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:31:50.328877    4621 cache.go:56] Caching tarball of preloaded images
	I0729 16:31:50.328954    4621 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:31:50.328960    4621 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:31:50.329022    4621 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/offline-docker-871000/config.json ...
	I0729 16:31:50.329032    4621 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/offline-docker-871000/config.json: {Name:mkc8fde066325fc89d8b327673bc3d32ed226ecf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:31:50.329328    4621 start.go:360] acquireMachinesLock for offline-docker-871000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:31:50.329360    4621 start.go:364] duration metric: took 25.042µs to acquireMachinesLock for "offline-docker-871000"
	I0729 16:31:50.329371    4621 start.go:93] Provisioning new machine with config: &{Name:offline-docker-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:31:50.329405    4621 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:31:50.333841    4621 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 16:31:50.349544    4621 start.go:159] libmachine.API.Create for "offline-docker-871000" (driver="qemu2")
	I0729 16:31:50.349572    4621 client.go:168] LocalClient.Create starting
	I0729 16:31:50.349641    4621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:31:50.349671    4621 main.go:141] libmachine: Decoding PEM data...
	I0729 16:31:50.349683    4621 main.go:141] libmachine: Parsing certificate...
	I0729 16:31:50.349727    4621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:31:50.349749    4621 main.go:141] libmachine: Decoding PEM data...
	I0729 16:31:50.349755    4621 main.go:141] libmachine: Parsing certificate...
	I0729 16:31:50.350115    4621 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:31:50.497150    4621 main.go:141] libmachine: Creating SSH key...
	I0729 16:31:50.662069    4621 main.go:141] libmachine: Creating Disk image...
	I0729 16:31:50.662078    4621 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:31:50.662307    4621 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/offline-docker-871000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/offline-docker-871000/disk.qcow2
	I0729 16:31:50.679402    4621 main.go:141] libmachine: STDOUT: 
	I0729 16:31:50.679425    4621 main.go:141] libmachine: STDERR: 
	I0729 16:31:50.679489    4621 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/offline-docker-871000/disk.qcow2 +20000M
	I0729 16:31:50.688605    4621 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:31:50.688625    4621 main.go:141] libmachine: STDERR: 
	I0729 16:31:50.688639    4621 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/offline-docker-871000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/offline-docker-871000/disk.qcow2
	I0729 16:31:50.688643    4621 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:31:50.688659    4621 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:31:50.688682    4621 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/offline-docker-871000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/offline-docker-871000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/offline-docker-871000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:e0:4a:0f:cc:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/offline-docker-871000/disk.qcow2
	I0729 16:31:50.690545    4621 main.go:141] libmachine: STDOUT: 
	I0729 16:31:50.690562    4621 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:31:50.690581    4621 client.go:171] duration metric: took 341.014542ms to LocalClient.Create
	I0729 16:31:52.692614    4621 start.go:128] duration metric: took 2.363266875s to createHost
	I0729 16:31:52.692652    4621 start.go:83] releasing machines lock for "offline-docker-871000", held for 2.363358333s
	W0729 16:31:52.692665    4621 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:31:52.713953    4621 out.go:177] * Deleting "offline-docker-871000" in qemu2 ...
	W0729 16:31:52.725437    4621 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:31:52.725486    4621 start.go:729] Will try again in 5 seconds ...
	I0729 16:31:57.727495    4621 start.go:360] acquireMachinesLock for offline-docker-871000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:31:57.727845    4621 start.go:364] duration metric: took 212.917µs to acquireMachinesLock for "offline-docker-871000"
	I0729 16:31:57.727922    4621 start.go:93] Provisioning new machine with config: &{Name:offline-docker-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:31:57.728115    4621 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:31:57.737571    4621 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 16:31:57.770092    4621 start.go:159] libmachine.API.Create for "offline-docker-871000" (driver="qemu2")
	I0729 16:31:57.770132    4621 client.go:168] LocalClient.Create starting
	I0729 16:31:57.770200    4621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:31:57.770233    4621 main.go:141] libmachine: Decoding PEM data...
	I0729 16:31:57.770241    4621 main.go:141] libmachine: Parsing certificate...
	I0729 16:31:57.770279    4621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:31:57.770301    4621 main.go:141] libmachine: Decoding PEM data...
	I0729 16:31:57.770306    4621 main.go:141] libmachine: Parsing certificate...
	I0729 16:31:57.770599    4621 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:31:57.912282    4621 main.go:141] libmachine: Creating SSH key...
	I0729 16:31:57.955170    4621 main.go:141] libmachine: Creating Disk image...
	I0729 16:31:57.955182    4621 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:31:57.955366    4621 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/offline-docker-871000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/offline-docker-871000/disk.qcow2
	I0729 16:31:57.964389    4621 main.go:141] libmachine: STDOUT: 
	I0729 16:31:57.964409    4621 main.go:141] libmachine: STDERR: 
	I0729 16:31:57.964470    4621 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/offline-docker-871000/disk.qcow2 +20000M
	I0729 16:31:57.972503    4621 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:31:57.972528    4621 main.go:141] libmachine: STDERR: 
	I0729 16:31:57.972539    4621 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/offline-docker-871000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/offline-docker-871000/disk.qcow2
	I0729 16:31:57.972544    4621 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:31:57.972551    4621 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:31:57.972583    4621 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/offline-docker-871000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/offline-docker-871000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/offline-docker-871000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:03:53:b2:8e:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/offline-docker-871000/disk.qcow2
	I0729 16:31:57.974167    4621 main.go:141] libmachine: STDOUT: 
	I0729 16:31:57.974184    4621 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:31:57.974195    4621 client.go:171] duration metric: took 204.065125ms to LocalClient.Create
	I0729 16:31:59.976341    4621 start.go:128] duration metric: took 2.24825725s to createHost
	I0729 16:31:59.976416    4621 start.go:83] releasing machines lock for "offline-docker-871000", held for 2.248613s
	W0729 16:31:59.976816    4621 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-871000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-871000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:31:59.986213    4621 out.go:177] 
	W0729 16:31:59.990437    4621 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:31:59.990470    4621 out.go:239] * 
	* 
	W0729 16:31:59.993178    4621 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:32:00.001370    4621 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-871000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-29 16:32:00.019457 -0700 PDT m=+2731.568249001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-871000 -n offline-docker-871000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-871000 -n offline-docker-871000: exit status 7 (56.76025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-871000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-871000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-871000
--- FAIL: TestOffline (9.95s)
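
Note: the root cause here is not minikube itself but the host network helper. minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which must reach the socket_vmnet daemon at /var/run/socket_vmnet; "Connection refused" on that unix socket means the daemon is not running (or not listening) on the agent. The same error recurs in most of the qemu2 failures below. A minimal reachability probe, as a sketch (socket path taken from the log; the probe itself is illustrative, not minikube code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// Dial the unix socket socket_vmnet_client connects to; a
	// "connect: connection refused" here reproduces the failure mode above.
	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}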

TestCertOptions (11.96s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-528000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-528000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (11.696804917s)

-- stdout --
	* [cert-options-528000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-528000" primary control-plane node in "cert-options-528000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-528000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-528000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-528000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-528000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-528000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (76.769916ms)

-- stdout --
	* The control-plane node cert-options-528000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-528000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-528000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-528000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-528000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-528000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (43.8ms)

-- stdout --
	* The control-plane node cert-options-528000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-528000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-528000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port.
-- stdout --
	* The control-plane node cert-options-528000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-528000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-29 16:32:34.696049 -0700 PDT m=+2766.245887459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-528000 -n cert-options-528000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-528000 -n cert-options-528000: exit status 7 (29.789791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-528000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-528000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-528000
--- FAIL: TestCertOptions (11.96s)
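
Note on the failure mode: every start attempt in this section dies the same way, with the qemu2 driver unable to reach the socket_vmnet socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so no VM ever boots and the follow-up assertions all run against a host in state=Stopped. A quick triage step on the build host is to confirm the socket_vmnet daemon is actually running; the paths below are taken from the log above, and the restart line is only a sketch that assumes a Homebrew-managed service:

	# Is the socket present and the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Hypothetical fix if the daemon is down and was installed via Homebrew:
	sudo brew services restart socket_vmnet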

TestCertExpiration (197.39s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-870000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-870000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (12.011372917s)

-- stdout --
	* [cert-expiration-870000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-870000" primary control-plane node in "cert-expiration-870000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-870000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-870000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-870000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-870000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-870000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.230759333s)

-- stdout --
	* [cert-expiration-870000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-870000" primary control-plane node in "cert-expiration-870000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-870000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-870000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-870000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-870000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-870000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-870000" primary control-plane node in "cert-expiration-870000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-870000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-870000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-870000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-29 16:35:37.467624 -0700 PDT m=+2949.022978917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-870000 -n cert-expiration-870000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-870000 -n cert-expiration-870000: exit status 7 (65.851625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-870000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-870000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-870000
--- FAIL: TestCertExpiration (197.39s)
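
Note: TestCertExpiration never exercises what it is meant to verify (start with --cert-expiration=3m, let the certs lapse, then confirm that the 8760h restart regenerates them and warns about the expiry); both starts fail on the same socket_vmnet connection. Once a VM actually boots, the expiry window the test depends on can be checked by hand, reusing the certificate path probed by TestCertOptions above:

	# Print the apiserver certificate's notBefore/notAfter dates inside the VM
	out/minikube-darwin-arm64 ssh -p cert-expiration-870000 -- \
	  "sudo openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"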

TestDockerFlags (12.56s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-942000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-942000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.165365959s)

-- stdout --
	* [docker-flags-942000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-942000" primary control-plane node in "docker-flags-942000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-942000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:32:10.324371    4823 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:32:10.324505    4823 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:32:10.324508    4823 out.go:304] Setting ErrFile to fd 2...
	I0729 16:32:10.324511    4823 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:32:10.324648    4823 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:32:10.325763    4823 out.go:298] Setting JSON to false
	I0729 16:32:10.342392    4823 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3697,"bootTime":1722292233,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:32:10.342460    4823 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:32:10.362411    4823 out.go:177] * [docker-flags-942000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:32:10.370212    4823 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:32:10.370239    4823 notify.go:220] Checking for updates...
	I0729 16:32:10.378273    4823 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:32:10.382258    4823 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:32:10.385289    4823 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:32:10.388281    4823 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:32:10.391238    4823 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:32:10.394524    4823 config.go:182] Loaded profile config "force-systemd-flag-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:32:10.394588    4823 config.go:182] Loaded profile config "multinode-971000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:32:10.394640    4823 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:32:10.398327    4823 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:32:10.405266    4823 start.go:297] selected driver: qemu2
	I0729 16:32:10.405272    4823 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:32:10.405278    4823 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:32:10.407536    4823 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:32:10.410223    4823 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:32:10.413343    4823 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0729 16:32:10.413374    4823 cni.go:84] Creating CNI manager for ""
	I0729 16:32:10.413387    4823 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:32:10.413395    4823 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:32:10.413430    4823 start.go:340] cluster config:
	{Name:docker-flags-942000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-942000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:32:10.417018    4823 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:32:10.421279    4823 out.go:177] * Starting "docker-flags-942000" primary control-plane node in "docker-flags-942000" cluster
	I0729 16:32:10.429337    4823 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:32:10.429365    4823 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:32:10.429374    4823 cache.go:56] Caching tarball of preloaded images
	I0729 16:32:10.429445    4823 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:32:10.429450    4823 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:32:10.429516    4823 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/docker-flags-942000/config.json ...
	I0729 16:32:10.429531    4823 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/docker-flags-942000/config.json: {Name:mkb1e5d9b27a63518483da1db76a3ea3b2e4bf6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:32:10.429803    4823 start.go:360] acquireMachinesLock for docker-flags-942000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:32:12.430290    4823 start.go:364] duration metric: took 2.000518292s to acquireMachinesLock for "docker-flags-942000"
	I0729 16:32:12.430530    4823 start.go:93] Provisioning new machine with config: &{Name:docker-flags-942000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-942000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:32:12.430747    4823 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:32:12.441383    4823 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 16:32:12.493043    4823 start.go:159] libmachine.API.Create for "docker-flags-942000" (driver="qemu2")
	I0729 16:32:12.493091    4823 client.go:168] LocalClient.Create starting
	I0729 16:32:12.493209    4823 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:32:12.493267    4823 main.go:141] libmachine: Decoding PEM data...
	I0729 16:32:12.493286    4823 main.go:141] libmachine: Parsing certificate...
	I0729 16:32:12.493358    4823 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:32:12.493401    4823 main.go:141] libmachine: Decoding PEM data...
	I0729 16:32:12.493417    4823 main.go:141] libmachine: Parsing certificate...
	I0729 16:32:12.494208    4823 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:32:12.652099    4823 main.go:141] libmachine: Creating SSH key...
	I0729 16:32:12.834711    4823 main.go:141] libmachine: Creating Disk image...
	I0729 16:32:12.834718    4823 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:32:12.834970    4823 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/docker-flags-942000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/docker-flags-942000/disk.qcow2
	I0729 16:32:12.844601    4823 main.go:141] libmachine: STDOUT: 
	I0729 16:32:12.844627    4823 main.go:141] libmachine: STDERR: 
	I0729 16:32:12.844679    4823 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/docker-flags-942000/disk.qcow2 +20000M
	I0729 16:32:12.852742    4823 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:32:12.852755    4823 main.go:141] libmachine: STDERR: 
	I0729 16:32:12.852773    4823 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/docker-flags-942000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/docker-flags-942000/disk.qcow2
	I0729 16:32:12.852776    4823 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:32:12.852790    4823 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:32:12.852815    4823 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/docker-flags-942000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/docker-flags-942000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/docker-flags-942000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:0f:8b:b1:4e:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/docker-flags-942000/disk.qcow2
	I0729 16:32:12.854381    4823 main.go:141] libmachine: STDOUT: 
	I0729 16:32:12.854393    4823 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:32:12.854409    4823 client.go:171] duration metric: took 361.322708ms to LocalClient.Create
	I0729 16:32:14.856577    4823 start.go:128] duration metric: took 2.425862s to createHost
	I0729 16:32:14.856701    4823 start.go:83] releasing machines lock for "docker-flags-942000", held for 2.426444708s
	W0729 16:32:14.856809    4823 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:32:14.865126    4823 out.go:177] * Deleting "docker-flags-942000" in qemu2 ...
	W0729 16:32:14.897919    4823 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:32:14.897956    4823 start.go:729] Will try again in 5 seconds ...
	I0729 16:32:19.900568    4823 start.go:360] acquireMachinesLock for docker-flags-942000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:32:19.900787    4823 start.go:364] duration metric: took 168.916µs to acquireMachinesLock for "docker-flags-942000"
	I0729 16:32:19.900833    4823 start.go:93] Provisioning new machine with config: &{Name:docker-flags-942000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-942000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:32:19.900974    4823 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:32:19.906040    4823 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 16:32:19.938836    4823 start.go:159] libmachine.API.Create for "docker-flags-942000" (driver="qemu2")
	I0729 16:32:19.938871    4823 client.go:168] LocalClient.Create starting
	I0729 16:32:19.938931    4823 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:32:19.938981    4823 main.go:141] libmachine: Decoding PEM data...
	I0729 16:32:19.938994    4823 main.go:141] libmachine: Parsing certificate...
	I0729 16:32:19.939040    4823 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:32:19.939064    4823 main.go:141] libmachine: Decoding PEM data...
	I0729 16:32:19.939073    4823 main.go:141] libmachine: Parsing certificate...
	I0729 16:32:19.939474    4823 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:32:20.264962    4823 main.go:141] libmachine: Creating SSH key...
	I0729 16:32:20.389780    4823 main.go:141] libmachine: Creating Disk image...
	I0729 16:32:20.389791    4823 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:32:20.389958    4823 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/docker-flags-942000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/docker-flags-942000/disk.qcow2
	I0729 16:32:20.398735    4823 main.go:141] libmachine: STDOUT: 
	I0729 16:32:20.398766    4823 main.go:141] libmachine: STDERR: 
	I0729 16:32:20.398815    4823 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/docker-flags-942000/disk.qcow2 +20000M
	I0729 16:32:20.406681    4823 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:32:20.406708    4823 main.go:141] libmachine: STDERR: 
	I0729 16:32:20.406719    4823 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/docker-flags-942000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/docker-flags-942000/disk.qcow2
	I0729 16:32:20.406725    4823 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:32:20.406733    4823 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:32:20.406775    4823 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/docker-flags-942000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/docker-flags-942000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/docker-flags-942000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:27:0b:58:b7:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/docker-flags-942000/disk.qcow2
	I0729 16:32:20.408423    4823 main.go:141] libmachine: STDOUT: 
	I0729 16:32:20.408447    4823 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:32:20.408460    4823 client.go:171] duration metric: took 469.599458ms to LocalClient.Create
	I0729 16:32:22.410587    4823 start.go:128] duration metric: took 2.509655083s to createHost
	I0729 16:32:22.410661    4823 start.go:83] releasing machines lock for "docker-flags-942000", held for 2.509930333s
	W0729 16:32:22.410997    4823 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-942000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-942000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:32:22.425611    4823 out.go:177] 
	W0729 16:32:22.429681    4823 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:32:22.429714    4823 out.go:239] * 
	* 
	W0729 16:32:22.440097    4823 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:32:22.444610    4823 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-942000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-942000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-942000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (93.527584ms)

-- stdout --
	* The control-plane node docker-flags-942000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-942000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-942000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-942000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-942000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-942000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-942000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-942000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-942000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (98.725916ms)

-- stdout --
	* The control-plane node docker-flags-942000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-942000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-942000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-942000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-942000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-942000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-29 16:32:22.64781 -0700 PDT m=+2754.197284251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-942000 -n docker-flags-942000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-942000 -n docker-flags-942000: exit status 7 (36.056875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-942000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-942000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-942000
--- FAIL: TestDockerFlags (12.56s)
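
Note: TestDockerFlags also fails at provisioning rather than at its real checks. With a running cluster, the two systemctl probes above would be expected to surface the flags passed at start time: an Environment property containing FOO=BAR and BAZ=BAT (from --docker-env) and an ExecStart line containing --debug and --icc=true (from --docker-opt); the exact formatting of systemctl's output is an assumption here. The probes can be replayed by hand against a booted profile, using the profile name from this log:

	# Replay the test's two probes once the VM is up
	out/minikube-darwin-arm64 -p docker-flags-942000 ssh \
	  "sudo systemctl show docker --property=Environment --no-pager"
	out/minikube-darwin-arm64 -p docker-flags-942000 ssh \
	  "sudo systemctl show docker --property=ExecStart --no-pager"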

TestForceSystemdFlag (12.24s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-163000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-163000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.895311208s)

-- stdout --
	* [force-systemd-flag-163000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-163000" primary control-plane node in "force-systemd-flag-163000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-163000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:32:08.014459    4807 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:32:08.014608    4807 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:32:08.014612    4807 out.go:304] Setting ErrFile to fd 2...
	I0729 16:32:08.014614    4807 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:32:08.014746    4807 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:32:08.015760    4807 out.go:298] Setting JSON to false
	I0729 16:32:08.031908    4807 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3695,"bootTime":1722292233,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:32:08.031975    4807 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:32:08.037868    4807 out.go:177] * [force-systemd-flag-163000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:32:08.044788    4807 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:32:08.044846    4807 notify.go:220] Checking for updates...
	I0729 16:32:08.053803    4807 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:32:08.057734    4807 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:32:08.061760    4807 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:32:08.064789    4807 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:32:08.067738    4807 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:32:08.071056    4807 config.go:182] Loaded profile config "force-systemd-env-113000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:32:08.071131    4807 config.go:182] Loaded profile config "multinode-971000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:32:08.071188    4807 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:32:08.074780    4807 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:32:08.081764    4807 start.go:297] selected driver: qemu2
	I0729 16:32:08.081770    4807 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:32:08.081775    4807 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:32:08.084098    4807 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:32:08.087738    4807 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:32:08.090806    4807 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 16:32:08.090847    4807 cni.go:84] Creating CNI manager for ""
	I0729 16:32:08.090853    4807 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:32:08.090858    4807 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:32:08.090898    4807 start.go:340] cluster config:
	{Name:force-systemd-flag-163000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:32:08.094642    4807 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:32:08.102854    4807 out.go:177] * Starting "force-systemd-flag-163000" primary control-plane node in "force-systemd-flag-163000" cluster
	I0729 16:32:08.106748    4807 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:32:08.106780    4807 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:32:08.106794    4807 cache.go:56] Caching tarball of preloaded images
	I0729 16:32:08.106860    4807 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:32:08.106866    4807 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:32:08.106944    4807 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/force-systemd-flag-163000/config.json ...
	I0729 16:32:08.106962    4807 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/force-systemd-flag-163000/config.json: {Name:mk10456af553e8647c6953f1f6ade74e058ff1df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:32:08.107178    4807 start.go:360] acquireMachinesLock for force-systemd-flag-163000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:32:09.920583    4807 start.go:364] duration metric: took 1.813425625s to acquireMachinesLock for "force-systemd-flag-163000"
	I0729 16:32:09.920767    4807 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-163000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:32:09.920991    4807 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:32:09.930319    4807 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 16:32:09.981545    4807 start.go:159] libmachine.API.Create for "force-systemd-flag-163000" (driver="qemu2")
	I0729 16:32:09.981597    4807 client.go:168] LocalClient.Create starting
	I0729 16:32:09.981733    4807 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:32:09.981793    4807 main.go:141] libmachine: Decoding PEM data...
	I0729 16:32:09.981809    4807 main.go:141] libmachine: Parsing certificate...
	I0729 16:32:09.981872    4807 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:32:09.981915    4807 main.go:141] libmachine: Decoding PEM data...
	I0729 16:32:09.981933    4807 main.go:141] libmachine: Parsing certificate...
	I0729 16:32:09.982552    4807 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:32:10.318367    4807 main.go:141] libmachine: Creating SSH key...
	I0729 16:32:10.406361    4807 main.go:141] libmachine: Creating Disk image...
	I0729 16:32:10.406370    4807 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:32:10.406534    4807 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-flag-163000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-flag-163000/disk.qcow2
	I0729 16:32:10.415875    4807 main.go:141] libmachine: STDOUT: 
	I0729 16:32:10.415895    4807 main.go:141] libmachine: STDERR: 
	I0729 16:32:10.415953    4807 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-flag-163000/disk.qcow2 +20000M
	I0729 16:32:10.426120    4807 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:32:10.426134    4807 main.go:141] libmachine: STDERR: 
	I0729 16:32:10.426154    4807 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-flag-163000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-flag-163000/disk.qcow2
	I0729 16:32:10.426162    4807 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:32:10.426176    4807 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:32:10.426203    4807 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-flag-163000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-flag-163000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-flag-163000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:f3:85:db:35:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-flag-163000/disk.qcow2
	I0729 16:32:10.427858    4807 main.go:141] libmachine: STDOUT: 
	I0729 16:32:10.427872    4807 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:32:10.427894    4807 client.go:171] duration metric: took 446.304334ms to LocalClient.Create
	I0729 16:32:12.430058    4807 start.go:128] duration metric: took 2.509061917s to createHost
	I0729 16:32:12.430134    4807 start.go:83] releasing machines lock for "force-systemd-flag-163000", held for 2.509594417s
	W0729 16:32:12.430260    4807 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:32:12.449399    4807 out.go:177] * Deleting "force-systemd-flag-163000" in qemu2 ...
	W0729 16:32:12.471858    4807 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:32:12.471885    4807 start.go:729] Will try again in 5 seconds ...
	I0729 16:32:17.473901    4807 start.go:360] acquireMachinesLock for force-systemd-flag-163000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:32:17.474272    4807 start.go:364] duration metric: took 284.167µs to acquireMachinesLock for "force-systemd-flag-163000"
	I0729 16:32:17.474409    4807 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-163000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:32:17.474649    4807 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:32:17.484228    4807 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 16:32:17.530081    4807 start.go:159] libmachine.API.Create for "force-systemd-flag-163000" (driver="qemu2")
	I0729 16:32:17.530136    4807 client.go:168] LocalClient.Create starting
	I0729 16:32:17.530249    4807 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:32:17.530317    4807 main.go:141] libmachine: Decoding PEM data...
	I0729 16:32:17.530333    4807 main.go:141] libmachine: Parsing certificate...
	I0729 16:32:17.530398    4807 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:32:17.530449    4807 main.go:141] libmachine: Decoding PEM data...
	I0729 16:32:17.530459    4807 main.go:141] libmachine: Parsing certificate...
	I0729 16:32:17.530977    4807 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:32:17.689856    4807 main.go:141] libmachine: Creating SSH key...
	I0729 16:32:17.819033    4807 main.go:141] libmachine: Creating Disk image...
	I0729 16:32:17.819039    4807 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:32:17.819208    4807 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-flag-163000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-flag-163000/disk.qcow2
	I0729 16:32:17.828321    4807 main.go:141] libmachine: STDOUT: 
	I0729 16:32:17.828339    4807 main.go:141] libmachine: STDERR: 
	I0729 16:32:17.828389    4807 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-flag-163000/disk.qcow2 +20000M
	I0729 16:32:17.836353    4807 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:32:17.836379    4807 main.go:141] libmachine: STDERR: 
	I0729 16:32:17.836389    4807 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-flag-163000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-flag-163000/disk.qcow2
	I0729 16:32:17.836401    4807 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:32:17.836412    4807 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:32:17.836443    4807 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-flag-163000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-flag-163000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-flag-163000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:68:65:ab:be:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-flag-163000/disk.qcow2
	I0729 16:32:17.838080    4807 main.go:141] libmachine: STDOUT: 
	I0729 16:32:17.838106    4807 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:32:17.838119    4807 client.go:171] duration metric: took 307.987667ms to LocalClient.Create
	I0729 16:32:19.840266    4807 start.go:128] duration metric: took 2.365650375s to createHost
	I0729 16:32:19.840375    4807 start.go:83] releasing machines lock for "force-systemd-flag-163000", held for 2.366127667s
	W0729 16:32:19.840763    4807 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-163000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-163000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:32:19.848999    4807 out.go:177] 
	W0729 16:32:19.856115    4807 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:32:19.856147    4807 out.go:239] * 
	* 
	W0729 16:32:19.859024    4807 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:32:19.869013    4807 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-163000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-163000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-163000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (88.381375ms)

-- stdout --
	* The control-plane node force-systemd-flag-163000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-163000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-163000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-29 16:32:19.974142 -0700 PDT m=+2751.523535667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-163000 -n force-systemd-flag-163000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-163000 -n force-systemd-flag-163000: exit status 7 (37.322459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-163000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-163000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-163000
--- FAIL: TestForceSystemdFlag (12.24s)
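Note: both VM-creation attempts above fail at the same step. qemu is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which must reach the socket_vmnet daemon on the unix socket /var/run/socket_vmnet; the daemon was evidently not listening, so every qemu2 start on this host fails identically. A minimal Go sketch (illustrative only, not part of the test suite) that reproduces the reachability check:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the same unix socket socket_vmnet_client needs. If the
	// daemon is down, this fails with "connection refused", matching
	// the STDERR captured in the log above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}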

TestForceSystemdEnv (10.13s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-113000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-113000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.80500475s)

-- stdout --
	* [force-systemd-env-113000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-113000" primary control-plane node in "force-systemd-env-113000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-113000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:32:00.185442    4761 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:32:00.185575    4761 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:32:00.185579    4761 out.go:304] Setting ErrFile to fd 2...
	I0729 16:32:00.185581    4761 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:32:00.185709    4761 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:32:00.186746    4761 out.go:298] Setting JSON to false
	I0729 16:32:00.202846    4761 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3687,"bootTime":1722292233,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:32:00.202912    4761 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:32:00.208474    4761 out.go:177] * [force-systemd-env-113000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:32:00.215373    4761 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:32:00.215409    4761 notify.go:220] Checking for updates...
	I0729 16:32:00.223325    4761 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:32:00.226391    4761 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:32:00.229390    4761 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:32:00.232331    4761 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:32:00.235360    4761 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0729 16:32:00.238682    4761 config.go:182] Loaded profile config "multinode-971000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:32:00.238729    4761 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:32:00.243361    4761 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:32:00.250392    4761 start.go:297] selected driver: qemu2
	I0729 16:32:00.250401    4761 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:32:00.250410    4761 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:32:00.252836    4761 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:32:00.257342    4761 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:32:00.260439    4761 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 16:32:00.260488    4761 cni.go:84] Creating CNI manager for ""
	I0729 16:32:00.260497    4761 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:32:00.260501    4761 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:32:00.260528    4761 start.go:340] cluster config:
	{Name:force-systemd-env-113000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-113000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:32:00.264234    4761 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:32:00.271380    4761 out.go:177] * Starting "force-systemd-env-113000" primary control-plane node in "force-systemd-env-113000" cluster
	I0729 16:32:00.275388    4761 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:32:00.275405    4761 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:32:00.275415    4761 cache.go:56] Caching tarball of preloaded images
	I0729 16:32:00.275496    4761 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:32:00.275507    4761 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:32:00.275576    4761 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/force-systemd-env-113000/config.json ...
	I0729 16:32:00.275587    4761 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/force-systemd-env-113000/config.json: {Name:mke6ba32ac0f79873f6c5274f3f5353a6212830d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:32:00.275812    4761 start.go:360] acquireMachinesLock for force-systemd-env-113000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:32:00.275850    4761 start.go:364] duration metric: took 30.625µs to acquireMachinesLock for "force-systemd-env-113000"
	I0729 16:32:00.275863    4761 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-113000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-113000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:32:00.275895    4761 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:32:00.279376    4761 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 16:32:00.297029    4761 start.go:159] libmachine.API.Create for "force-systemd-env-113000" (driver="qemu2")
	I0729 16:32:00.297056    4761 client.go:168] LocalClient.Create starting
	I0729 16:32:00.297112    4761 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:32:00.297143    4761 main.go:141] libmachine: Decoding PEM data...
	I0729 16:32:00.297152    4761 main.go:141] libmachine: Parsing certificate...
	I0729 16:32:00.297196    4761 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:32:00.297219    4761 main.go:141] libmachine: Decoding PEM data...
	I0729 16:32:00.297227    4761 main.go:141] libmachine: Parsing certificate...
	I0729 16:32:00.297595    4761 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:32:00.443505    4761 main.go:141] libmachine: Creating SSH key...
	I0729 16:32:00.499761    4761 main.go:141] libmachine: Creating Disk image...
	I0729 16:32:00.499771    4761 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:32:00.499941    4761 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-env-113000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-env-113000/disk.qcow2
	I0729 16:32:00.508935    4761 main.go:141] libmachine: STDOUT: 
	I0729 16:32:00.508956    4761 main.go:141] libmachine: STDERR: 
	I0729 16:32:00.509007    4761 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-env-113000/disk.qcow2 +20000M
	I0729 16:32:00.516704    4761 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:32:00.516716    4761 main.go:141] libmachine: STDERR: 
	I0729 16:32:00.516731    4761 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-env-113000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-env-113000/disk.qcow2
	I0729 16:32:00.516736    4761 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:32:00.516747    4761 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:32:00.516770    4761 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-env-113000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-env-113000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-env-113000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:bf:5b:10:73:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-env-113000/disk.qcow2
	I0729 16:32:00.518297    4761 main.go:141] libmachine: STDOUT: 
	I0729 16:32:00.518308    4761 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:32:00.518330    4761 client.go:171] duration metric: took 221.276917ms to LocalClient.Create
	I0729 16:32:02.520486    4761 start.go:128] duration metric: took 2.244630375s to createHost
	I0729 16:32:02.520577    4761 start.go:83] releasing machines lock for "force-systemd-env-113000", held for 2.244784667s
	W0729 16:32:02.520685    4761 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:32:02.535862    4761 out.go:177] * Deleting "force-systemd-env-113000" in qemu2 ...
	W0729 16:32:02.562634    4761 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:32:02.562671    4761 start.go:729] Will try again in 5 seconds ...
	I0729 16:32:07.562928    4761 start.go:360] acquireMachinesLock for force-systemd-env-113000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:32:07.563406    4761 start.go:364] duration metric: took 386.958µs to acquireMachinesLock for "force-systemd-env-113000"
	I0729 16:32:07.563539    4761 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-113000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-113000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:32:07.563830    4761 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:32:07.569384    4761 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 16:32:07.620535    4761 start.go:159] libmachine.API.Create for "force-systemd-env-113000" (driver="qemu2")
	I0729 16:32:07.620588    4761 client.go:168] LocalClient.Create starting
	I0729 16:32:07.620703    4761 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:32:07.620756    4761 main.go:141] libmachine: Decoding PEM data...
	I0729 16:32:07.620773    4761 main.go:141] libmachine: Parsing certificate...
	I0729 16:32:07.620838    4761 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:32:07.620883    4761 main.go:141] libmachine: Decoding PEM data...
	I0729 16:32:07.620899    4761 main.go:141] libmachine: Parsing certificate...
	I0729 16:32:07.621432    4761 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:32:07.781240    4761 main.go:141] libmachine: Creating SSH key...
	I0729 16:32:07.896957    4761 main.go:141] libmachine: Creating Disk image...
	I0729 16:32:07.896964    4761 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:32:07.897160    4761 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-env-113000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-env-113000/disk.qcow2
	I0729 16:32:07.907123    4761 main.go:141] libmachine: STDOUT: 
	I0729 16:32:07.907145    4761 main.go:141] libmachine: STDERR: 
	I0729 16:32:07.907203    4761 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-env-113000/disk.qcow2 +20000M
	I0729 16:32:07.916121    4761 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:32:07.916141    4761 main.go:141] libmachine: STDERR: 
	I0729 16:32:07.916154    4761 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-env-113000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-env-113000/disk.qcow2
	I0729 16:32:07.916159    4761 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:32:07.916175    4761 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:32:07.916214    4761 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-env-113000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-env-113000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-env-113000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:dd:c4:6a:b7:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/force-systemd-env-113000/disk.qcow2
	I0729 16:32:07.918138    4761 main.go:141] libmachine: STDOUT: 
	I0729 16:32:07.918155    4761 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:32:07.918167    4761 client.go:171] duration metric: took 297.582416ms to LocalClient.Create
	I0729 16:32:09.920312    4761 start.go:128] duration metric: took 2.356513917s to createHost
	I0729 16:32:09.920400    4761 start.go:83] releasing machines lock for "force-systemd-env-113000", held for 2.3570405s
	W0729 16:32:09.920769    4761 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-113000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-113000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:32:09.934297    4761 out.go:177] 
	W0729 16:32:09.939448    4761 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:32:09.939482    4761 out.go:239] * 
	* 
	W0729 16:32:09.942158    4761 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:32:09.950316    4761 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-113000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-113000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-113000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (95.67125ms)

-- stdout --
	* The control-plane node force-systemd-env-113000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-113000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-113000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-29 16:32:10.05956 -0700 PDT m=+2741.608655042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-113000 -n force-systemd-env-113000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-113000 -n force-systemd-env-113000: exit status 7 (39.445209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-113000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-113000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-113000
--- FAIL: TestForceSystemdEnv (10.13s)
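Note: the sequence above repeats the pattern from TestForceSystemdFlag: StartHost fails, the half-created profile is deleted, and creation is retried once after a fixed 5-second delay, which is why each failed start shows two identical "Connection refused" attempts. A minimal sketch of that retry shape (function names hypothetical, not minikube's actual implementation):

package main

import (
	"errors"
	"fmt"
	"time"
)

// createHostWithRetry mirrors the observed behavior: one retry after a
// fixed delay, returning the second error if both attempts fail.
func createHostWithRetry(create func() error) error {
	if err := create(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second)
		return create()
	}
	return nil
}

func main() {
	err := createHostWithRetry(func() error {
		// Stand-in for the qemu launch that fails in the log above.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	})
	if err != nil {
		fmt.Println("giving up:", err)
	}
}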

TestFunctional/parallel/ServiceCmdConnect (38.1s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-905000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-905000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-c76m8" [ebee093f-b68e-49b6-9c5a-b4adedc1c159] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-c76m8" [ebee093f-b68e-49b6-9c5a-b4adedc1c159] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003874875s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.105.4:30389
functional_test.go:1657: error fetching http://192.168.105.4:30389: Get "http://192.168.105.4:30389": dial tcp 192.168.105.4:30389: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30389: Get "http://192.168.105.4:30389": dial tcp 192.168.105.4:30389: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30389: Get "http://192.168.105.4:30389": dial tcp 192.168.105.4:30389: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30389: Get "http://192.168.105.4:30389": dial tcp 192.168.105.4:30389: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30389: Get "http://192.168.105.4:30389": dial tcp 192.168.105.4:30389: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30389: Get "http://192.168.105.4:30389": dial tcp 192.168.105.4:30389: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30389: Get "http://192.168.105.4:30389": dial tcp 192.168.105.4:30389: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30389: Get "http://192.168.105.4:30389": dial tcp 192.168.105.4:30389: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.105.4:30389: Get "http://192.168.105.4:30389": dial tcp 192.168.105.4:30389: connect: connection refused
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-905000 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-6f49f58cd5-c76m8
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-905000/192.168.105.4
Start Time:       Mon, 29 Jul 2024 15:56:32 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=6f49f58cd5
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-6f49f58cd5
Containers:
  echoserver-arm:
    Container ID:   docker://6f4be0fe11050ed8ba8fe4837c39410997570aafcfc502596a83c5878ba02099
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 29 Jul 2024 15:56:48 -0700
      Finished:     Mon, 29 Jul 2024 15:56:48 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fh55w (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-fh55w:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  37s                default-scheduler  Successfully assigned default/hello-node-connect-6f49f58cd5-c76m8 to functional-905000
  Normal   Pulled     21s (x3 over 36s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Created    21s (x3 over 36s)  kubelet            Created container echoserver-arm
  Normal   Started    21s (x3 over 36s)  kubelet            Started container echoserver-arm
  Warning  BackOff    7s (x3 over 34s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-6f49f58cd5-c76m8_default(ebee093f-b68e-49b6-9c5a-b4adedc1c159)

functional_test.go:1604: (dbg) Run:  kubectl --context functional-905000 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1610: (dbg) Run:  kubectl --context functional-905000 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.211.33
IPs:                      10.99.211.33
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30389/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
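Note: the output above shows the full failure chain: the echoserver-arm container exits immediately with "exec format error" (a binary built for a different CPU architecture than the arm64 node), the pod therefore never becomes Ready, the Service keeps an empty Endpoints list, and every fetch of the NodePort URL is refused. A minimal Go sketch of the kind of polling functional_test.go performs (illustrative only; the URL is the one captured in the log):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// NodePort endpoint reported by "minikube service ... --url" above.
	url := "http://192.168.105.4:30389"
	for attempt := 0; attempt < 5; attempt++ {
		resp, err := http.Get(url)
		if err != nil {
			// With no ready endpoints behind the Service, this is
			// "connect: connection refused" on every attempt.
			fmt.Printf("error fetching %s: %v\n", url, err)
			time.Sleep(2 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("status:", resp.Status)
		return
	}
}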
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-905000 -n functional-905000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| mount     | -p functional-905000                                                                                                 | functional-905000 | jenkins | v1.33.1 | 29 Jul 24 15:56 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3955146712/001:/mount-9p      |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-905000 ssh findmnt                                                                                        | functional-905000 | jenkins | v1.33.1 | 29 Jul 24 15:56 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-905000 ssh findmnt                                                                                        | functional-905000 | jenkins | v1.33.1 | 29 Jul 24 15:56 PDT | 29 Jul 24 15:56 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-905000 ssh -- ls                                                                                          | functional-905000 | jenkins | v1.33.1 | 29 Jul 24 15:56 PDT | 29 Jul 24 15:56 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-905000 ssh cat                                                                                            | functional-905000 | jenkins | v1.33.1 | 29 Jul 24 15:56 PDT | 29 Jul 24 15:56 PDT |
	|           | /mount-9p/test-1722293815803207000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-905000 ssh stat                                                                                           | functional-905000 | jenkins | v1.33.1 | 29 Jul 24 15:57 PDT | 29 Jul 24 15:57 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-905000 ssh stat                                                                                           | functional-905000 | jenkins | v1.33.1 | 29 Jul 24 15:57 PDT | 29 Jul 24 15:57 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-905000 ssh sudo                                                                                           | functional-905000 | jenkins | v1.33.1 | 29 Jul 24 15:57 PDT | 29 Jul 24 15:57 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-905000 ssh findmnt                                                                                        | functional-905000 | jenkins | v1.33.1 | 29 Jul 24 15:57 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-905000                                                                                                 | functional-905000 | jenkins | v1.33.1 | 29 Jul 24 15:57 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3208121530/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-905000 ssh findmnt                                                                                        | functional-905000 | jenkins | v1.33.1 | 29 Jul 24 15:57 PDT | 29 Jul 24 15:57 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-905000 ssh -- ls                                                                                          | functional-905000 | jenkins | v1.33.1 | 29 Jul 24 15:57 PDT | 29 Jul 24 15:57 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-905000 ssh sudo                                                                                           | functional-905000 | jenkins | v1.33.1 | 29 Jul 24 15:57 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-905000                                                                                                 | functional-905000 | jenkins | v1.33.1 | 29 Jul 24 15:57 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2103708035/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-905000                                                                                                 | functional-905000 | jenkins | v1.33.1 | 29 Jul 24 15:57 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2103708035/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-905000 ssh findmnt                                                                                        | functional-905000 | jenkins | v1.33.1 | 29 Jul 24 15:57 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-905000                                                                                                 | functional-905000 | jenkins | v1.33.1 | 29 Jul 24 15:57 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2103708035/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-905000 ssh findmnt                                                                                        | functional-905000 | jenkins | v1.33.1 | 29 Jul 24 15:57 PDT | 29 Jul 24 15:57 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-905000 ssh findmnt                                                                                        | functional-905000 | jenkins | v1.33.1 | 29 Jul 24 15:57 PDT | 29 Jul 24 15:57 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-905000 ssh findmnt                                                                                        | functional-905000 | jenkins | v1.33.1 | 29 Jul 24 15:57 PDT | 29 Jul 24 15:57 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-905000                                                                                                 | functional-905000 | jenkins | v1.33.1 | 29 Jul 24 15:57 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-905000                                                                                                 | functional-905000 | jenkins | v1.33.1 | 29 Jul 24 15:57 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-905000 --dry-run                                                                                       | functional-905000 | jenkins | v1.33.1 | 29 Jul 24 15:57 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-905000                                                                                                 | functional-905000 | jenkins | v1.33.1 | 29 Jul 24 15:57 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-905000 | jenkins | v1.33.1 | 29 Jul 24 15:57 PDT |                     |
	|           | -p functional-905000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
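
	For reference, the mount-verification sequence recorded in the table above can be replayed by hand. This is a minimal sketch using only flags that appear in the log; <host-dir> stands in for the temporary host directory shown in the table entries:

	  minikube mount -p functional-905000 <host-dir>:/mount-9p --alsologtostderr -v=1 --port 46464   # expose a host dir over 9p on an explicit port
	  minikube ssh -p functional-905000 "findmnt -T /mount-9p | grep 9p"                             # confirm the 9p mount is visible in the guest
	  minikube ssh -p functional-905000 -- ls -la /mount-9p                                          # inspect its contents
	  minikube ssh -p functional-905000 "sudo umount -f /mount-9p"                                   # unmount inside the guest
	  minikube mount -p functional-905000 --kill=true                                                # kill any lingering mount processes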
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 15:57:03
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 15:57:03.602780    2662 out.go:291] Setting OutFile to fd 1 ...
	I0729 15:57:03.602920    2662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 15:57:03.602923    2662 out.go:304] Setting ErrFile to fd 2...
	I0729 15:57:03.602925    2662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 15:57:03.603061    2662 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 15:57:03.604448    2662 out.go:298] Setting JSON to false
	I0729 15:57:03.621473    2662 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1590,"bootTime":1722292233,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 15:57:03.621565    2662 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 15:57:03.623413    2662 out.go:177] * [functional-905000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 15:57:03.630277    2662 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 15:57:03.630333    2662 notify.go:220] Checking for updates...
	I0729 15:57:03.637225    2662 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 15:57:03.640201    2662 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 15:57:03.647177    2662 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 15:57:03.655229    2662 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 15:57:03.659185    2662 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 15:57:03.662450    2662 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 15:57:03.662695    2662 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 15:57:03.667273    2662 out.go:177] * Using the qemu2 driver based on the existing profile
	I0729 15:57:03.674167    2662 start.go:297] selected driver: qemu2
	I0729 15:57:03.674172    2662 start.go:901] validating driver "qemu2" against &{Name:functional-905000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-905000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 15:57:03.674217    2662 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 15:57:03.679173    2662 out.go:177] 
	W0729 15:57:03.683167    2662 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0729 15:57:03.687254    2662 out.go:177] 
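
	The dry-run above exits with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MiB is below the 1800MB floor the validator reports; the undersized request appears to be deliberate on the test's part. For comparison, a sketch of the same invocation with an allocation that would pass the check, assuming nothing else about the host:

	  minikube start -p functional-905000 --dry-run --memory 1800MB --alsologtostderr --driver=qemu2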
	
	
	==> Docker <==
	Jul 29 22:57:04 functional-905000 dockerd[6110]: time="2024-07-29T22:57:04.525274918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 22:57:04 functional-905000 dockerd[6110]: time="2024-07-29T22:57:04.525339293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 22:57:04 functional-905000 dockerd[6110]: time="2024-07-29T22:57:04.525356835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 22:57:04 functional-905000 dockerd[6110]: time="2024-07-29T22:57:04.525447752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 22:57:04 functional-905000 dockerd[6110]: time="2024-07-29T22:57:04.552140398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 22:57:04 functional-905000 dockerd[6110]: time="2024-07-29T22:57:04.552187023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 22:57:04 functional-905000 dockerd[6110]: time="2024-07-29T22:57:04.552195440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 22:57:04 functional-905000 dockerd[6110]: time="2024-07-29T22:57:04.552323274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 22:57:04 functional-905000 cri-dockerd[6355]: time="2024-07-29T22:57:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8eea21d78c3b13e228714f378710805860969b2fe96268477cfb5d10fb46a70d/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 29 22:57:04 functional-905000 cri-dockerd[6355]: time="2024-07-29T22:57:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/95838488966dc4f8e91ffda13ccd473a9ef56a5967bf1d9cdfaebe0ddcc73cf0/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 29 22:57:04 functional-905000 dockerd[6102]: time="2024-07-29T22:57:04.822915620Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Jul 29 22:57:05 functional-905000 dockerd[6110]: time="2024-07-29T22:57:05.705768727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 22:57:05 functional-905000 dockerd[6110]: time="2024-07-29T22:57:05.705934810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 22:57:05 functional-905000 dockerd[6110]: time="2024-07-29T22:57:05.705951435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 22:57:05 functional-905000 dockerd[6110]: time="2024-07-29T22:57:05.706016227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 22:57:05 functional-905000 dockerd[6102]: time="2024-07-29T22:57:05.727444394Z" level=info msg="ignoring event" container=7daa6e1260f5b99641c50d8bb7119372b8eafa21b5b56ce56204e3d26b0ae10e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 22:57:05 functional-905000 dockerd[6110]: time="2024-07-29T22:57:05.727495019Z" level=info msg="shim disconnected" id=7daa6e1260f5b99641c50d8bb7119372b8eafa21b5b56ce56204e3d26b0ae10e namespace=moby
	Jul 29 22:57:05 functional-905000 dockerd[6110]: time="2024-07-29T22:57:05.727608352Z" level=warning msg="cleaning up after shim disconnected" id=7daa6e1260f5b99641c50d8bb7119372b8eafa21b5b56ce56204e3d26b0ae10e namespace=moby
	Jul 29 22:57:05 functional-905000 dockerd[6110]: time="2024-07-29T22:57:05.727613102Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 22:57:09 functional-905000 cri-dockerd[6355]: time="2024-07-29T22:57:09Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Jul 29 22:57:09 functional-905000 dockerd[6110]: time="2024-07-29T22:57:09.293178731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 22:57:09 functional-905000 dockerd[6110]: time="2024-07-29T22:57:09.293247981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 22:57:09 functional-905000 dockerd[6110]: time="2024-07-29T22:57:09.293262606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 22:57:09 functional-905000 dockerd[6110]: time="2024-07-29T22:57:09.293580647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 22:57:09 functional-905000 dockerd[6102]: time="2024-07-29T22:57:09.406381931Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
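
	The dockerd and cri-dockerd entries above come from the node's systemd journal; a hedged way to tail them directly, assuming the guest uses the usual docker and cri-docker unit names:

	  minikube ssh -p functional-905000 "sudo journalctl -u docker --no-pager | tail -n 50"
	  minikube ssh -p functional-905000 "sudo journalctl -u cri-docker --no-pager | tail -n 50"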
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	fff35173a29e4       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        1 second ago         Running             kubernetes-dashboard      0                   8eea21d78c3b1       kubernetes-dashboard-779776cb65-8cqxf
	7daa6e1260f5b       72565bf5bbedf                                                                                         5 seconds ago        Exited              echoserver-arm            3                   060af13ca82c5       hello-node-65f5d5cc78-kq27q
	b1f938f3487f4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   11 seconds ago       Exited              mount-munger              0                   eb0895784fd6c       busybox-mount
	fcbf14c90f0f9       nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c                         21 seconds ago       Running             myfrontend                0                   07dbcd19c1249       sp-pod
	6f4be0fe11050       72565bf5bbedf                                                                                         22 seconds ago       Exited              echoserver-arm            2                   942a457d168fa       hello-node-connect-6f49f58cd5-c76m8
	a534aac39ad59       nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         44 seconds ago       Running             nginx                     0                   ee407e146eb2a       nginx-svc
	83355d37c7187       2351f570ed0ea                                                                                         About a minute ago   Running             kube-proxy                2                   a5ff54018ebb4       kube-proxy-5fhls
	6131f6d00ffb5       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   03a001b3a5b75       coredns-7db6d8ff4d-mfnc6
	c52a98f4092b1       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   d0224737cc3d8       storage-provisioner
	bbf8db85f466d       8e97cdb19e7cc                                                                                         About a minute ago   Running             kube-controller-manager   2                   96193a517d697       kube-controller-manager-functional-905000
	c8361af9c9c13       d48f992a22722                                                                                         About a minute ago   Running             kube-scheduler            2                   7315c98a25797       kube-scheduler-functional-905000
	de01a0c75d994       014faa467e297                                                                                         About a minute ago   Running             etcd                      2                   9751910632b3d       etcd-functional-905000
	a5ecc8ed74fc8       61773190d42ff                                                                                         About a minute ago   Running             kube-apiserver            0                   526b78dc016cb       kube-apiserver-functional-905000
	b2b6e4e1974a7       ba04bb24b9575                                                                                         2 minutes ago        Exited              storage-provisioner       1                   093284da82103       storage-provisioner
	08d900a09a4a7       2437cf7621777                                                                                         2 minutes ago        Exited              coredns                   1                   3b7216ffa31d5       coredns-7db6d8ff4d-mfnc6
	e93fb8df87eb8       2351f570ed0ea                                                                                         2 minutes ago        Exited              kube-proxy                1                   e81e4b3279cdf       kube-proxy-5fhls
	6d0a515b3b827       8e97cdb19e7cc                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   74d0e4c9fab43       kube-controller-manager-functional-905000
	11adba82d7c24       014faa467e297                                                                                         2 minutes ago        Exited              etcd                      1                   71804dccbebed       etcd-functional-905000
	43fc8a2195dd4       d48f992a22722                                                                                         2 minutes ago        Exited              kube-scheduler            1                   e8ab4fcc92d36       kube-scheduler-functional-905000
	
	
	==> coredns [08d900a09a4a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59602 - 16174 "HINFO IN 5037964121828203986.2039701997046833890. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.014782927s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6131f6d00ffb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42619 - 41368 "HINFO IN 102181315460028879.5739277686755045910. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.009037207s
	[INFO] 10.244.0.1:8427 - 57919 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000100875s
	[INFO] 10.244.0.1:10407 - 23168 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000101166s
	[INFO] 10.244.0.1:5099 - 34952 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000029125s
	[INFO] 10.244.0.1:62129 - 34781 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001693538s
	[INFO] 10.244.0.1:37600 - 1736 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000104749s
	[INFO] 10.244.0.1:63254 - 21891 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000178125s
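
	The NOERROR answers above show nginx-svc resolving inside the pod network. One hedged way to issue the same query by hand; the probe pod name and busybox image tag here are illustrative, not taken from this run:

	  kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- \
	    nslookup nginx-svc.default.svc.cluster.local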
	
	
	==> describe nodes <==
	Name:               functional-905000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-905000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9ecc7e4bd8b0211d6b42552bd8a0113828840b9
	                    minikube.k8s.io/name=functional-905000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T15_54_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 22:54:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-905000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 22:57:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 22:56:53 +0000   Mon, 29 Jul 2024 22:54:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 22:56:53 +0000   Mon, 29 Jul 2024 22:54:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 22:56:53 +0000   Mon, 29 Jul 2024 22:54:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 22:56:53 +0000   Mon, 29 Jul 2024 22:54:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-905000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 c348366df39b4c28832f30335c9740d5
	  System UUID:                c348366df39b4c28832f30335c9740d5
	  Boot ID:                    60198fde-894f-4d5d-807d-ce6451daec71
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.0
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-65f5d5cc78-kq27q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  default                     hello-node-connect-6f49f58cd5-c76m8          0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 coredns-7db6d8ff4d-mfnc6                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m21s
	  kube-system                 etcd-functional-905000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m35s
	  kube-system                 kube-apiserver-functional-905000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-controller-manager-functional-905000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-proxy-5fhls                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-functional-905000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kubernetes-dashboard        dashboard-metrics-scraper-b5fc48f67-c5qv2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kubernetes-dashboard        kubernetes-dashboard-779776cb65-8cqxf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m20s                kube-proxy       
	  Normal  Starting                 76s                  kube-proxy       
	  Normal  Starting                 119s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m35s                kubelet          Node functional-905000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m35s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m35s                kubelet          Node functional-905000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m35s                kubelet          Node functional-905000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m35s                kubelet          Starting kubelet.
	  Normal  NodeReady                2m31s                kubelet          Node functional-905000 status is now: NodeReady
	  Normal  RegisteredNode           2m22s                node-controller  Node functional-905000 event: Registered Node functional-905000 in Controller
	  Normal  NodeHasNoDiskPressure    2m3s (x8 over 2m3s)  kubelet          Node functional-905000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m3s (x8 over 2m3s)  kubelet          Node functional-905000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m3s (x7 over 2m3s)  kubelet          Node functional-905000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           108s                 node-controller  Node functional-905000 event: Registered Node functional-905000 in Controller
	  Normal  Starting                 81s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  81s (x8 over 81s)    kubelet          Node functional-905000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    81s (x8 over 81s)    kubelet          Node functional-905000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     81s (x7 over 81s)    kubelet          Node functional-905000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  81s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           65s                  node-controller  Node functional-905000 event: Registered Node functional-905000 in Controller
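
	The node report above matches what kubectl returns directly; assuming minikube's usual context naming, the equivalent manual query would be:

	  kubectl --context functional-905000 describe node functional-905000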
	
	
	==> dmesg <==
	[ +11.549960] kauditd_printk_skb: 31 callbacks suppressed
	[  +2.092776] systemd-fstab-generator[5188]: Ignoring "noauto" option for root device
	[ +10.482795] systemd-fstab-generator[5620]: Ignoring "noauto" option for root device
	[  +0.053683] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.106828] systemd-fstab-generator[5653]: Ignoring "noauto" option for root device
	[  +0.089402] systemd-fstab-generator[5665]: Ignoring "noauto" option for root device
	[  +0.116765] systemd-fstab-generator[5679]: Ignoring "noauto" option for root device
	[  +5.103916] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.341800] systemd-fstab-generator[6308]: Ignoring "noauto" option for root device
	[  +0.092434] systemd-fstab-generator[6320]: Ignoring "noauto" option for root device
	[  +0.081957] systemd-fstab-generator[6332]: Ignoring "noauto" option for root device
	[  +0.083398] systemd-fstab-generator[6347]: Ignoring "noauto" option for root device
	[  +0.214724] systemd-fstab-generator[6514]: Ignoring "noauto" option for root device
	[  +1.478177] systemd-fstab-generator[6636]: Ignoring "noauto" option for root device
	[  +0.902478] kauditd_printk_skb: 159 callbacks suppressed
	[Jul29 22:56] kauditd_printk_skb: 71 callbacks suppressed
	[  +3.666245] systemd-fstab-generator[7674]: Ignoring "noauto" option for root device
	[  +4.160342] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.168068] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.237880] kauditd_printk_skb: 20 callbacks suppressed
	[  +9.318160] kauditd_printk_skb: 13 callbacks suppressed
	[  +7.279778] kauditd_printk_skb: 38 callbacks suppressed
	[ +17.645270] kauditd_printk_skb: 21 callbacks suppressed
	[Jul29 22:57] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.074429] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [11adba82d7c2] <==
	{"level":"info","ts":"2024-07-29T22:55:08.120948Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T22:55:09.217675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T22:55:09.217813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T22:55:09.217863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-07-29T22:55:09.217895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T22:55:09.217911Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-29T22:55:09.217934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T22:55:09.218038Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-29T22:55:09.2229Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-905000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T22:55:09.222994Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T22:55:09.223755Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T22:55:09.223928Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T22:55:09.223784Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T22:55:09.227178Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-07-29T22:55:09.227257Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T22:55:35.230344Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T22:55:35.230374Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-905000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-07-29T22:55:35.230417Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T22:55:35.230459Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T22:55:35.241394Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T22:55:35.241415Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T22:55:35.241442Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-07-29T22:55:35.243403Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-29T22:55:35.243441Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-29T22:55:35.243446Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-905000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [de01a0c75d99] <==
	{"level":"info","ts":"2024-07-29T22:55:50.677609Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T22:55:50.67763Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T22:55:50.677765Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-07-29T22:55:50.677812Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-07-29T22:55:50.677871Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T22:55:50.677904Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T22:55:50.683168Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T22:55:50.687606Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-29T22:55:50.688152Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-29T22:55:50.68858Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T22:55:50.688614Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T22:55:52.355048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-29T22:55:52.35527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-29T22:55:52.355312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-29T22:55:52.355344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-07-29T22:55:52.355359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-29T22:55:52.355383Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-07-29T22:55:52.355403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-29T22:55:52.357873Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-905000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T22:55:52.358175Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T22:55:52.358505Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T22:55:52.358302Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T22:55:52.358327Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T22:55:52.363877Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-07-29T22:55:52.364317Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 22:57:10 up 2 min,  0 users,  load average: 0.89, 0.46, 0.19
	Linux functional-905000 5.10.207 #1 SMP PREEMPT Tue Jul 23 01:19:38 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a5ecc8ed74fc] <==
	I0729 22:55:52.984659       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 22:55:52.984643       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 22:55:52.984746       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 22:55:52.985009       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 22:55:52.985687       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 22:55:52.985907       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 22:55:52.987152       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0729 22:55:52.987839       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 22:55:53.008922       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 22:55:53.885348       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 22:55:54.219154       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 22:55:54.223305       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 22:55:54.236588       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 22:55:54.244859       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 22:55:54.247082       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 22:56:05.132205       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 22:56:05.331206       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 22:56:13.158098       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.107.223.216"}
	I0729 22:56:18.240515       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0729 22:56:18.282757       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.203.61"}
	I0729 22:56:22.481187       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.107.165.122"}
	I0729 22:56:32.879643       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.211.33"}
	I0729 22:57:04.123482       1 controller.go:615] quota admission added evaluator for: namespaces
	I0729 22:57:04.238006       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.64.32"}
	I0729 22:57:04.245898       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.225.196"}
	
	
	==> kube-controller-manager [6d0a515b3b82] <==
	I0729 22:55:22.177686       1 shared_informer.go:320] Caches are synced for PVC protection
	I0729 22:55:22.178385       1 shared_informer.go:320] Caches are synced for GC
	I0729 22:55:22.183661       1 shared_informer.go:320] Caches are synced for cronjob
	I0729 22:55:22.184453       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0729 22:55:22.184454       1 shared_informer.go:320] Caches are synced for crt configmap
	I0729 22:55:22.184725       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0729 22:55:22.190510       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0729 22:55:22.190524       1 shared_informer.go:320] Caches are synced for job
	I0729 22:55:22.191677       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0729 22:55:22.191682       1 shared_informer.go:320] Caches are synced for disruption
	I0729 22:55:22.191788       1 shared_informer.go:320] Caches are synced for stateful set
	I0729 22:55:22.192608       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0729 22:55:22.193914       1 shared_informer.go:320] Caches are synced for HPA
	I0729 22:55:22.194482       1 shared_informer.go:320] Caches are synced for TTL
	I0729 22:55:22.289528       1 shared_informer.go:320] Caches are synced for namespace
	I0729 22:55:22.320900       1 shared_informer.go:320] Caches are synced for service account
	I0729 22:55:22.390322       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 22:55:22.391478       1 shared_informer.go:320] Caches are synced for taint
	I0729 22:55:22.391547       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0729 22:55:22.391602       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-905000"
	I0729 22:55:22.391675       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 22:55:22.399188       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 22:55:22.805649       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 22:55:22.880278       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 22:55:22.880287       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [bbf8db85f466] <==
	I0729 22:56:49.090003       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="34.584µs"
	I0729 22:56:51.659885       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="37.708µs"
	I0729 22:57:02.659942       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="26µs"
	I0729 22:57:04.154453       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="12.415696ms"
	E0729 22:57:04.154506       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0729 22:57:04.156841       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="7.967477ms"
	E0729 22:57:04.156857       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0729 22:57:04.156982       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="2.459923ms"
	E0729 22:57:04.156994       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0729 22:57:04.162701       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="2.885799ms"
	E0729 22:57:04.162769       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0729 22:57:04.162849       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="4.468427ms"
	E0729 22:57:04.162861       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0729 22:57:04.166976       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="2.517756ms"
	E0729 22:57:04.167027       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0729 22:57:04.179773       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="5.083303ms"
	I0729 22:57:04.222290       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="39.283259ms"
	I0729 22:57:04.226833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="4.413427ms"
	I0729 22:57:04.227086       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="47.293903ms"
	I0729 22:57:04.227567       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="9.25µs"
	I0729 22:57:04.233939       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="6.972892ms"
	I0729 22:57:04.234115       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="49.708µs"
	I0729 22:57:06.203140       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="26.125µs"
	I0729 22:57:10.233640       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="4.484408ms"
	I0729 22:57:10.233757       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="24.75µs"
	
	
	==> kube-proxy [83355d37c718] <==
	I0729 22:55:54.217648       1 server_linux.go:69] "Using iptables proxy"
	I0729 22:55:54.224394       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0729 22:55:54.249367       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 22:55:54.249390       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 22:55:54.249400       1 server_linux.go:165] "Using iptables Proxier"
	I0729 22:55:54.250045       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 22:55:54.250118       1 server.go:872] "Version info" version="v1.30.3"
	I0729 22:55:54.250126       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 22:55:54.250508       1 config.go:192] "Starting service config controller"
	I0729 22:55:54.250520       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 22:55:54.250530       1 config.go:101] "Starting endpoint slice config controller"
	I0729 22:55:54.250542       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 22:55:54.250796       1 config.go:319] "Starting node config controller"
	I0729 22:55:54.250802       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 22:55:54.351058       1 shared_informer.go:320] Caches are synced for node config
	I0729 22:55:54.351066       1 shared_informer.go:320] Caches are synced for service config
	I0729 22:55:54.351092       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [e93fb8df87eb] <==
	I0729 22:55:10.930165       1 server_linux.go:69] "Using iptables proxy"
	I0729 22:55:10.933509       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0729 22:55:10.979786       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 22:55:10.979807       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 22:55:10.979817       1 server_linux.go:165] "Using iptables Proxier"
	I0729 22:55:10.980654       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 22:55:10.980728       1 server.go:872] "Version info" version="v1.30.3"
	I0729 22:55:10.980738       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 22:55:10.981295       1 config.go:192] "Starting service config controller"
	I0729 22:55:10.981299       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 22:55:10.981309       1 config.go:101] "Starting endpoint slice config controller"
	I0729 22:55:10.981311       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 22:55:10.981431       1 config.go:319] "Starting node config controller"
	I0729 22:55:10.981433       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 22:55:11.082013       1 shared_informer.go:320] Caches are synced for service config
	I0729 22:55:11.082013       1 shared_informer.go:320] Caches are synced for node config
	I0729 22:55:11.082027       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [43fc8a2195dd] <==
	I0729 22:55:08.444638       1 serving.go:380] Generated self-signed cert in-memory
	W0729 22:55:09.799715       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 22:55:09.799810       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 22:55:09.799852       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 22:55:09.799869       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 22:55:09.804553       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 22:55:09.804563       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 22:55:09.805303       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 22:55:09.805301       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 22:55:09.807251       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 22:55:09.805311       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 22:55:09.907326       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 22:55:35.219757       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c8361af9c9c1] <==
	I0729 22:55:51.218931       1 serving.go:380] Generated self-signed cert in-memory
	W0729 22:55:52.904472       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 22:55:52.904491       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 22:55:52.904496       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 22:55:52.904499       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 22:55:52.941375       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 22:55:52.941387       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 22:55:52.942071       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 22:55:52.942111       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 22:55:52.942118       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 22:55:52.942125       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 22:55:53.043164       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 22:56:51 functional-905000 kubelet[6643]: I0729 22:56:51.659793    6643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=2.94904676 podStartE2EDuration="3.659778182s" podCreationTimestamp="2024-07-29 22:56:48 +0000 UTC" firstStartedPulling="2024-07-29 22:56:48.545245403 +0000 UTC m=+58.961886328" lastFinishedPulling="2024-07-29 22:56:49.255976825 +0000 UTC m=+59.672617750" observedRunningTime="2024-07-29 22:56:50.098904267 +0000 UTC m=+60.515545151" watchObservedRunningTime="2024-07-29 22:56:51.659778182 +0000 UTC m=+62.076419108"
	Jul 29 22:56:57 functional-905000 kubelet[6643]: I0729 22:56:57.450608    6643 topology_manager.go:215] "Topology Admit Handler" podUID="ffef8aab-e427-42bd-bbf4-5ed5f96ee354" podNamespace="default" podName="busybox-mount"
	Jul 29 22:56:57 functional-905000 kubelet[6643]: I0729 22:56:57.531470    6643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/ffef8aab-e427-42bd-bbf4-5ed5f96ee354-test-volume\") pod \"busybox-mount\" (UID: \"ffef8aab-e427-42bd-bbf4-5ed5f96ee354\") " pod="default/busybox-mount"
	Jul 29 22:56:57 functional-905000 kubelet[6643]: I0729 22:56:57.531498    6643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn6mf\" (UniqueName: \"kubernetes.io/projected/ffef8aab-e427-42bd-bbf4-5ed5f96ee354-kube-api-access-kn6mf\") pod \"busybox-mount\" (UID: \"ffef8aab-e427-42bd-bbf4-5ed5f96ee354\") " pod="default/busybox-mount"
	Jul 29 22:57:01 functional-905000 kubelet[6643]: I0729 22:57:01.353745    6643 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/ffef8aab-e427-42bd-bbf4-5ed5f96ee354-test-volume\") pod \"ffef8aab-e427-42bd-bbf4-5ed5f96ee354\" (UID: \"ffef8aab-e427-42bd-bbf4-5ed5f96ee354\") "
	Jul 29 22:57:01 functional-905000 kubelet[6643]: I0729 22:57:01.353769    6643 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn6mf\" (UniqueName: \"kubernetes.io/projected/ffef8aab-e427-42bd-bbf4-5ed5f96ee354-kube-api-access-kn6mf\") pod \"ffef8aab-e427-42bd-bbf4-5ed5f96ee354\" (UID: \"ffef8aab-e427-42bd-bbf4-5ed5f96ee354\") "
	Jul 29 22:57:01 functional-905000 kubelet[6643]: I0729 22:57:01.353964    6643 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffef8aab-e427-42bd-bbf4-5ed5f96ee354-test-volume" (OuterVolumeSpecName: "test-volume") pod "ffef8aab-e427-42bd-bbf4-5ed5f96ee354" (UID: "ffef8aab-e427-42bd-bbf4-5ed5f96ee354"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jul 29 22:57:01 functional-905000 kubelet[6643]: I0729 22:57:01.354879    6643 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffef8aab-e427-42bd-bbf4-5ed5f96ee354-kube-api-access-kn6mf" (OuterVolumeSpecName: "kube-api-access-kn6mf") pod "ffef8aab-e427-42bd-bbf4-5ed5f96ee354" (UID: "ffef8aab-e427-42bd-bbf4-5ed5f96ee354"). InnerVolumeSpecName "kube-api-access-kn6mf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 22:57:01 functional-905000 kubelet[6643]: I0729 22:57:01.453870    6643 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kn6mf\" (UniqueName: \"kubernetes.io/projected/ffef8aab-e427-42bd-bbf4-5ed5f96ee354-kube-api-access-kn6mf\") on node \"functional-905000\" DevicePath \"\""
	Jul 29 22:57:01 functional-905000 kubelet[6643]: I0729 22:57:01.453891    6643 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/ffef8aab-e427-42bd-bbf4-5ed5f96ee354-test-volume\") on node \"functional-905000\" DevicePath \"\""
	Jul 29 22:57:02 functional-905000 kubelet[6643]: I0729 22:57:02.166579    6643 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb0895784fd6cc25b1290b47e2a472ad9e3986b2c0d8de51917ee199c5564336"
	Jul 29 22:57:02 functional-905000 kubelet[6643]: I0729 22:57:02.654899    6643 scope.go:117] "RemoveContainer" containerID="6f4be0fe11050ed8ba8fe4837c39410997570aafcfc502596a83c5878ba02099"
	Jul 29 22:57:02 functional-905000 kubelet[6643]: E0729 22:57:02.655005    6643 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-6f49f58cd5-c76m8_default(ebee093f-b68e-49b6-9c5a-b4adedc1c159)\"" pod="default/hello-node-connect-6f49f58cd5-c76m8" podUID="ebee093f-b68e-49b6-9c5a-b4adedc1c159"
	Jul 29 22:57:04 functional-905000 kubelet[6643]: I0729 22:57:04.186018    6643 topology_manager.go:215] "Topology Admit Handler" podUID="8bd4bf7c-7feb-4233-bcfb-fd25b6191a88" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-779776cb65-8cqxf"
	Jul 29 22:57:04 functional-905000 kubelet[6643]: E0729 22:57:04.186075    6643 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ffef8aab-e427-42bd-bbf4-5ed5f96ee354" containerName="mount-munger"
	Jul 29 22:57:04 functional-905000 kubelet[6643]: I0729 22:57:04.186094    6643 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffef8aab-e427-42bd-bbf4-5ed5f96ee354" containerName="mount-munger"
	Jul 29 22:57:04 functional-905000 kubelet[6643]: I0729 22:57:04.221811    6643 topology_manager.go:215] "Topology Admit Handler" podUID="2b86547a-5dd9-472a-aa58-e0d8699f0143" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-b5fc48f67-c5qv2"
	Jul 29 22:57:04 functional-905000 kubelet[6643]: I0729 22:57:04.273001    6643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8bd4bf7c-7feb-4233-bcfb-fd25b6191a88-tmp-volume\") pod \"kubernetes-dashboard-779776cb65-8cqxf\" (UID: \"8bd4bf7c-7feb-4233-bcfb-fd25b6191a88\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-8cqxf"
	Jul 29 22:57:04 functional-905000 kubelet[6643]: I0729 22:57:04.273037    6643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpncf\" (UniqueName: \"kubernetes.io/projected/8bd4bf7c-7feb-4233-bcfb-fd25b6191a88-kube-api-access-xpncf\") pod \"kubernetes-dashboard-779776cb65-8cqxf\" (UID: \"8bd4bf7c-7feb-4233-bcfb-fd25b6191a88\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-8cqxf"
	Jul 29 22:57:04 functional-905000 kubelet[6643]: I0729 22:57:04.373671    6643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2b86547a-5dd9-472a-aa58-e0d8699f0143-tmp-volume\") pod \"dashboard-metrics-scraper-b5fc48f67-c5qv2\" (UID: \"2b86547a-5dd9-472a-aa58-e0d8699f0143\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-c5qv2"
	Jul 29 22:57:04 functional-905000 kubelet[6643]: I0729 22:57:04.373690    6643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9dfq\" (UniqueName: \"kubernetes.io/projected/2b86547a-5dd9-472a-aa58-e0d8699f0143-kube-api-access-m9dfq\") pod \"dashboard-metrics-scraper-b5fc48f67-c5qv2\" (UID: \"2b86547a-5dd9-472a-aa58-e0d8699f0143\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-c5qv2"
	Jul 29 22:57:05 functional-905000 kubelet[6643]: I0729 22:57:05.655222    6643 scope.go:117] "RemoveContainer" containerID="86cfbbeed58d2b10331832172a1ad89fb5b005dfb20a66f6c6e7fc247e1aa322"
	Jul 29 22:57:06 functional-905000 kubelet[6643]: I0729 22:57:06.196493    6643 scope.go:117] "RemoveContainer" containerID="86cfbbeed58d2b10331832172a1ad89fb5b005dfb20a66f6c6e7fc247e1aa322"
	Jul 29 22:57:06 functional-905000 kubelet[6643]: I0729 22:57:06.196639    6643 scope.go:117] "RemoveContainer" containerID="7daa6e1260f5b99641c50d8bb7119372b8eafa21b5b56ce56204e3d26b0ae10e"
	Jul 29 22:57:06 functional-905000 kubelet[6643]: E0729 22:57:06.196720    6643 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-65f5d5cc78-kq27q_default(68d6f59d-b414-4c34-8d94-10b93ac540b4)\"" pod="default/hello-node-65f5d5cc78-kq27q" podUID="68d6f59d-b414-4c34-8d94-10b93ac540b4"
	
	
	==> kubernetes-dashboard [fff35173a29e] <==
	2024/07/29 22:57:09 Starting overwatch
	2024/07/29 22:57:09 Using namespace: kubernetes-dashboard
	2024/07/29 22:57:09 Using in-cluster config to connect to apiserver
	2024/07/29 22:57:09 Using secret token for csrf signing
	2024/07/29 22:57:09 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/07/29 22:57:09 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/07/29 22:57:09 Successful initial request to the apiserver, version: v1.30.3
	2024/07/29 22:57:09 Generating JWE encryption key
	2024/07/29 22:57:09 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/07/29 22:57:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/07/29 22:57:09 Initializing JWE encryption key from synchronized object
	2024/07/29 22:57:09 Creating in-cluster Sidecar client
	2024/07/29 22:57:09 Serving insecurely on HTTP port: 9090
	2024/07/29 22:57:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [b2b6e4e1974a] <==
	I0729 22:55:10.971838       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 22:55:10.975095       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 22:55:10.975109       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 22:55:28.364262       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 22:55:28.364476       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"29819c2a-e6a8-4914-852f-f1db020454de", APIVersion:"v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-905000_5980a0cf-b1f8-4b70-b200-c1c25196a260 became leader
	I0729 22:55:28.365166       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-905000_5980a0cf-b1f8-4b70-b200-c1c25196a260!
	I0729 22:55:28.465771       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-905000_5980a0cf-b1f8-4b70-b200-c1c25196a260!
	
	
	==> storage-provisioner [c52a98f4092b] <==
	I0729 22:55:54.168458       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 22:55:54.185839       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 22:55:54.185864       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 22:56:11.571572       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 22:56:11.571669       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-905000_3e7d8851-6173-4252-baf8-68f5dd64bb0f!
	I0729 22:56:11.571967       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"29819c2a-e6a8-4914-852f-f1db020454de", APIVersion:"v1", ResourceVersion:"586", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-905000_3e7d8851-6173-4252-baf8-68f5dd64bb0f became leader
	I0729 22:56:11.672612       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-905000_3e7d8851-6173-4252-baf8-68f5dd64bb0f!
	I0729 22:56:34.852757       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0729 22:56:34.852791       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    3327b12c-6dfc-42a1-9167-a89f1ed359c1 350 0 2024-07-29 22:54:49 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-07-29 22:54:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-9e95a5fe-0320-4f05-be80-14a5bb5c918b &PersistentVolumeClaim{ObjectMeta:{myclaim  default  9e95a5fe-0320-4f05-be80-14a5bb5c918b 717 0 2024-07-29 22:56:34 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-07-29 22:56:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-07-29 22:56:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0729 22:56:34.853160       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-9e95a5fe-0320-4f05-be80-14a5bb5c918b" provisioned
	I0729 22:56:34.853174       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0729 22:56:34.853177       1 volume_store.go:212] Trying to save persistentvolume "pvc-9e95a5fe-0320-4f05-be80-14a5bb5c918b"
	I0729 22:56:34.853473       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"9e95a5fe-0320-4f05-be80-14a5bb5c918b", APIVersion:"v1", ResourceVersion:"717", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0729 22:56:34.856703       1 volume_store.go:219] persistentvolume "pvc-9e95a5fe-0320-4f05-be80-14a5bb5c918b" saved
	I0729 22:56:34.856997       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"9e95a5fe-0320-4f05-be80-14a5bb5c918b", APIVersion:"v1", ResourceVersion:"717", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-9e95a5fe-0320-4f05-be80-14a5bb5c918b
	

-- /stdout --
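
Triage note: the storage-provisioner section above shows a complete, healthy hostpath provisioning cycle for "default/myclaim" (claim observed, volume pvc-9e95a5fe-0320-4f05-be80-14a5bb5c918b provisioned under /tmp/hostpath-provisioner, PV saved, ProvisioningSucceeded event emitted), so persistent storage is not implicated in this failure. A quick cross-check, reusing the object names from this run:

	# The claim should be Bound to the PV the provisioner log names.
	kubectl --context functional-905000 get pvc myclaim
	kubectl --context functional-905000 get pv pvc-9e95a5fe-0320-4f05-be80-14a5bb5c918b
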
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-905000 -n functional-905000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-905000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-b5fc48f67-c5qv2
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-905000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-c5qv2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-905000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-c5qv2: exit status 1 (41.732833ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-905000/192.168.105.4
	Start Time:       Mon, 29 Jul 2024 15:56:57 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://b1f938f3487f4e3f3879176f7e26ac61e447d1f34561dd67f8d00b29ee9827a3
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Jul 2024 15:56:59 -0700
	      Finished:     Mon, 29 Jul 2024 15:56:59 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kn6mf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-kn6mf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  13s   default-scheduler  Successfully assigned default/busybox-mount to functional-905000
	  Normal  Pulling    13s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     11s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.225s (1.225s including waiting). Image size: 3547125 bytes.
	  Normal  Created    11s   kubelet            Created container mount-munger
	  Normal  Started    11s   kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-b5fc48f67-c5qv2" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-905000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-c5qv2: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (38.10s)
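
Triage note: two separate symptoms appear in this trace. The controller-manager errors at 22:57:04 (serviceaccount "kubernetes-dashboard" not found) are a startup race while the dashboard addon's manifests are still being applied; the retries succeed milliseconds later once the ServiceAccount exists. The test itself fails on the echoserver-arm container crash-looping in the hello-node pods. A minimal follow-up sketch, using the context and pod names from this run (they will differ elsewhere):

	# Confirm the ServiceAccount the ReplicaSet controller was waiting for now exists.
	kubectl --context functional-905000 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard

	# Inspect the crash-looping container; --previous prints the log of the
	# last failed attempt rather than the freshly restarted one.
	kubectl --context functional-905000 logs hello-node-connect-6f49f58cd5-c76m8 --previous
	kubectl --context functional-905000 describe pod hello-node-connect-6f49f58cd5-c76m8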

TestMultiControlPlane/serial/StopSecondaryNode (312.3s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-365000 node stop m02 -v=7 --alsologtostderr: (12.184394959s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr
E0729 16:01:59.231072    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
E0729 16:02:40.193068    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
E0729 16:04:02.114737    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
E0729 16:04:39.641912    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr: exit status 7 (3m45.047515583s)

-- stdout --
	ha-365000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-365000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-365000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-365000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0729 16:01:52.782464    3264 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:01:52.782638    3264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:01:52.782641    3264 out.go:304] Setting ErrFile to fd 2...
	I0729 16:01:52.782644    3264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:01:52.782782    3264 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:01:52.782917    3264 out.go:298] Setting JSON to false
	I0729 16:01:52.782934    3264 mustload.go:65] Loading cluster: ha-365000
	I0729 16:01:52.782986    3264 notify.go:220] Checking for updates...
	I0729 16:01:52.783173    3264 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:01:52.783181    3264 status.go:255] checking status of ha-365000 ...
	I0729 16:01:52.783993    3264 status.go:330] ha-365000 host status = "Running" (err=<nil>)
	I0729 16:01:52.784003    3264 host.go:66] Checking if "ha-365000" exists ...
	I0729 16:01:52.784090    3264 host.go:66] Checking if "ha-365000" exists ...
	I0729 16:01:52.784201    3264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 16:01:52.784210    3264 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000/id_rsa Username:docker}
	W0729 16:03:07.785375    3264 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0729 16:03:07.785471    3264 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0729 16:03:07.785481    3264 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0729 16:03:07.785494    3264 status.go:257] ha-365000 status: &{Name:ha-365000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 16:03:07.785511    3264 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0729 16:03:07.785516    3264 status.go:255] checking status of ha-365000-m02 ...
	I0729 16:03:07.785756    3264 status.go:330] ha-365000-m02 host status = "Stopped" (err=<nil>)
	I0729 16:03:07.785762    3264 status.go:343] host is not running, skipping remaining checks
	I0729 16:03:07.785764    3264 status.go:257] ha-365000-m02 status: &{Name:ha-365000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 16:03:07.785768    3264 status.go:255] checking status of ha-365000-m03 ...
	I0729 16:03:07.786394    3264 status.go:330] ha-365000-m03 host status = "Running" (err=<nil>)
	I0729 16:03:07.786401    3264 host.go:66] Checking if "ha-365000-m03" exists ...
	I0729 16:03:07.786515    3264 host.go:66] Checking if "ha-365000-m03" exists ...
	I0729 16:03:07.786637    3264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 16:03:07.786650    3264 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000-m03/id_rsa Username:docker}
	W0729 16:04:22.788110    3264 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0729 16:04:22.788164    3264 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0729 16:04:22.788173    3264 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0729 16:04:22.788177    3264 status.go:257] ha-365000-m03 status: &{Name:ha-365000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 16:04:22.788185    3264 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0729 16:04:22.788190    3264 status.go:255] checking status of ha-365000-m04 ...
	I0729 16:04:22.788932    3264 status.go:330] ha-365000-m04 host status = "Running" (err=<nil>)
	I0729 16:04:22.788942    3264 host.go:66] Checking if "ha-365000-m04" exists ...
	I0729 16:04:22.789051    3264 host.go:66] Checking if "ha-365000-m04" exists ...
	I0729 16:04:22.789181    3264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 16:04:22.789193    3264 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000-m04/id_rsa Username:docker}
	W0729 16:05:37.791478    3264 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0729 16:05:37.791673    3264 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0729 16:05:37.791717    3264 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0729 16:05:37.791737    3264 status.go:257] ha-365000-m04 status: &{Name:ha-365000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0729 16:05:37.791783    3264 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr": ha-365000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-365000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-365000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-365000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr": ha-365000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-365000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-365000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-365000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr": ha-365000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-365000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-365000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-365000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000
E0729 16:06:18.253852    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
E0729 16:06:45.955020    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000: exit status 3 (1m15.069755375s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0729 16:06:52.861705    3334 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0729 16:06:52.861745    3334 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-365000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (312.30s)
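
Triage note: the stop itself succeeded (m02 reports Stopped on every field); the failure comes from the follow-up status check, where each SSH dial to the surviving nodes (192.168.105.5, .7 and .8, port 22) timed out after 75 seconds, so they were reported as host: Error. All remaining guests going unreachable at once usually points at the host-side socket_vmnet network rather than the individual VMs. A rough reachability check, assuming the default socket path this run was configured with:

	# Is the vmnet relay socket present, and is its daemon still running?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# Probe a node's SSH port with a 5-second connect timeout (BSD nc on macOS).
	nc -z -G 5 192.168.105.5 22 && echo reachable || echo unreachable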

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (225.14s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m30.101516041s)
ha_test.go:413: expected profile "ha-365000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-365000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-365000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-365000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000
E0729 16:09:39.639954    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000: exit status 3 (1m15.042000041s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0729 16:10:38.004808    3406 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0729 16:10:38.004858    3406 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-365000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (225.14s)
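
Triage note: this assertion concerns aggregate profile health: with one of three control-plane nodes stopped, "profile list" should report ha-365000 as "Degraded", but because the surviving hosts were unreachable (see the previous test) minikube computed "Stopped" instead, and the listing alone took 2m30s. The Status field the test asserts on can be read directly from the JSON, for example (assuming jq is available on the host):

	# Print Name and Status for each valid profile.
	out/minikube-darwin-arm64 profile list --output json \
	  | jq -r '.valid[] | [.Name, .Status] | @tsv'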

TestMultiControlPlane/serial/RestartSecondaryNode (305.31s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-365000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.150381625s)

-- stdout --
	* Starting "ha-365000-m02" control-plane node in "ha-365000" cluster
	* Restarting existing qemu2 VM for "ha-365000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-365000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:10:38.079634    3423 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:10:38.079960    3423 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:10:38.079965    3423 out.go:304] Setting ErrFile to fd 2...
	I0729 16:10:38.079968    3423 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:10:38.080173    3423 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:10:38.080498    3423 mustload.go:65] Loading cluster: ha-365000
	I0729 16:10:38.080805    3423 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0729 16:10:38.081134    3423 host.go:58] "ha-365000-m02" host status: Stopped
	I0729 16:10:38.084694    3423 out.go:177] * Starting "ha-365000-m02" control-plane node in "ha-365000" cluster
	I0729 16:10:38.088580    3423 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:10:38.088606    3423 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:10:38.088625    3423 cache.go:56] Caching tarball of preloaded images
	I0729 16:10:38.088735    3423 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:10:38.088743    3423 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:10:38.088828    3423 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/ha-365000/config.json ...
	I0729 16:10:38.089277    3423 start.go:360] acquireMachinesLock for ha-365000-m02: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:10:38.089333    3423 start.go:364] duration metric: took 39.75µs to acquireMachinesLock for "ha-365000-m02"
	I0729 16:10:38.089347    3423 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:10:38.089353    3423 fix.go:54] fixHost starting: m02
	I0729 16:10:38.089542    3423 fix.go:112] recreateIfNeeded on ha-365000-m02: state=Stopped err=<nil>
	W0729 16:10:38.089553    3423 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:10:38.092528    3423 out.go:177] * Restarting existing qemu2 VM for "ha-365000-m02" ...
	I0729 16:10:38.096482    3423 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:10:38.096538    3423 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:2f:7d:bd:70:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000-m02/disk.qcow2
	I0729 16:10:38.099930    3423 main.go:141] libmachine: STDOUT: 
	I0729 16:10:38.099956    3423 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:10:38.099989    3423 fix.go:56] duration metric: took 10.63475ms for fixHost
	I0729 16:10:38.099993    3423 start.go:83] releasing machines lock for "ha-365000-m02", held for 10.654125ms
	W0729 16:10:38.100000    3423 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:10:38.100040    3423 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:10:38.100046    3423 start.go:729] Will try again in 5 seconds ...
	I0729 16:10:43.102158    3423 start.go:360] acquireMachinesLock for ha-365000-m02: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:10:43.102671    3423 start.go:364] duration metric: took 430.917µs to acquireMachinesLock for "ha-365000-m02"
	I0729 16:10:43.102819    3423 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:10:43.102841    3423 fix.go:54] fixHost starting: m02
	I0729 16:10:43.103637    3423 fix.go:112] recreateIfNeeded on ha-365000-m02: state=Stopped err=<nil>
	W0729 16:10:43.103663    3423 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:10:43.118072    3423 out.go:177] * Restarting existing qemu2 VM for "ha-365000-m02" ...
	I0729 16:10:43.122346    3423 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:10:43.122550    3423 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:2f:7d:bd:70:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000-m02/disk.qcow2
	I0729 16:10:43.131569    3423 main.go:141] libmachine: STDOUT: 
	I0729 16:10:43.131637    3423 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:10:43.131722    3423 fix.go:56] duration metric: took 28.882709ms for fixHost
	I0729 16:10:43.131739    3423 start.go:83] releasing machines lock for "ha-365000-m02", held for 29.045792ms
	W0729 16:10:43.131951    3423 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-365000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-365000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:10:43.136347    3423 out.go:177] 
	W0729 16:10:43.140354    3423 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:10:43.140379    3423 out.go:239] * 
	* 
	W0729 16:10:43.148828    3423 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:10:43.153354    3423 out.go:177] 

** /stderr **
ha_test.go:422: I0729 16:10:38.079634    3423 out.go:291] Setting OutFile to fd 1 ...
I0729 16:10:38.079960    3423 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:10:38.079965    3423 out.go:304] Setting ErrFile to fd 2...
I0729 16:10:38.079968    3423 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:10:38.080173    3423 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
I0729 16:10:38.080498    3423 mustload.go:65] Loading cluster: ha-365000
I0729 16:10:38.080805    3423 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
W0729 16:10:38.081134    3423 host.go:58] "ha-365000-m02" host status: Stopped
I0729 16:10:38.084694    3423 out.go:177] * Starting "ha-365000-m02" control-plane node in "ha-365000" cluster
I0729 16:10:38.088580    3423 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0729 16:10:38.088606    3423 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0729 16:10:38.088625    3423 cache.go:56] Caching tarball of preloaded images
I0729 16:10:38.088735    3423 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0729 16:10:38.088743    3423 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0729 16:10:38.088828    3423 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/ha-365000/config.json ...
I0729 16:10:38.089277    3423 start.go:360] acquireMachinesLock for ha-365000-m02: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 16:10:38.089333    3423 start.go:364] duration metric: took 39.75µs to acquireMachinesLock for "ha-365000-m02"
I0729 16:10:38.089347    3423 start.go:96] Skipping create...Using existing machine configuration
I0729 16:10:38.089353    3423 fix.go:54] fixHost starting: m02
I0729 16:10:38.089542    3423 fix.go:112] recreateIfNeeded on ha-365000-m02: state=Stopped err=<nil>
W0729 16:10:38.089553    3423 fix.go:138] unexpected machine state, will restart: <nil>
I0729 16:10:38.092528    3423 out.go:177] * Restarting existing qemu2 VM for "ha-365000-m02" ...
I0729 16:10:38.096482    3423 qemu.go:418] Using hvf for hardware acceleration
I0729 16:10:38.096538    3423 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:2f:7d:bd:70:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000-m02/disk.qcow2
I0729 16:10:38.099930    3423 main.go:141] libmachine: STDOUT: 
I0729 16:10:38.099956    3423 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0729 16:10:38.099989    3423 fix.go:56] duration metric: took 10.63475ms for fixHost
I0729 16:10:38.099993    3423 start.go:83] releasing machines lock for "ha-365000-m02", held for 10.654125ms
W0729 16:10:38.100000    3423 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 16:10:38.100040    3423 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 16:10:38.100046    3423 start.go:729] Will try again in 5 seconds ...
I0729 16:10:43.102158    3423 start.go:360] acquireMachinesLock for ha-365000-m02: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 16:10:43.102671    3423 start.go:364] duration metric: took 430.917µs to acquireMachinesLock for "ha-365000-m02"
I0729 16:10:43.102819    3423 start.go:96] Skipping create...Using existing machine configuration
I0729 16:10:43.102841    3423 fix.go:54] fixHost starting: m02
I0729 16:10:43.103637    3423 fix.go:112] recreateIfNeeded on ha-365000-m02: state=Stopped err=<nil>
W0729 16:10:43.103663    3423 fix.go:138] unexpected machine state, will restart: <nil>
I0729 16:10:43.118072    3423 out.go:177] * Restarting existing qemu2 VM for "ha-365000-m02" ...
I0729 16:10:43.122346    3423 qemu.go:418] Using hvf for hardware acceleration
I0729 16:10:43.122550    3423 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:2f:7d:bd:70:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000-m02/disk.qcow2
I0729 16:10:43.131569    3423 main.go:141] libmachine: STDOUT: 
I0729 16:10:43.131637    3423 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0729 16:10:43.131722    3423 fix.go:56] duration metric: took 28.882709ms for fixHost
I0729 16:10:43.131739    3423 start.go:83] releasing machines lock for "ha-365000-m02", held for 29.045792ms
W0729 16:10:43.131951    3423 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-365000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-365000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 16:10:43.136347    3423 out.go:177] 
W0729 16:10:43.140354    3423 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 16:10:43.140379    3423 out.go:239] * 
* 
W0729 16:10:43.148828    3423 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 16:10:43.153354    3423 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-365000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr
E0729 16:11:02.706274    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0729 16:11:18.251663    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr: exit status 7 (3m45.0826045s)

-- stdout --
	ha-365000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-365000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-365000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-365000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	
-- /stdout --
** stderr ** 
	I0729 16:10:43.221195    3427 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:10:43.221397    3427 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:10:43.221401    3427 out.go:304] Setting ErrFile to fd 2...
	I0729 16:10:43.221404    3427 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:10:43.221566    3427 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:10:43.221718    3427 out.go:298] Setting JSON to false
	I0729 16:10:43.221731    3427 mustload.go:65] Loading cluster: ha-365000
	I0729 16:10:43.221778    3427 notify.go:220] Checking for updates...
	I0729 16:10:43.222011    3427 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:10:43.222021    3427 status.go:255] checking status of ha-365000 ...
	I0729 16:10:43.222833    3427 status.go:330] ha-365000 host status = "Running" (err=<nil>)
	I0729 16:10:43.222848    3427 host.go:66] Checking if "ha-365000" exists ...
	I0729 16:10:43.222969    3427 host.go:66] Checking if "ha-365000" exists ...
	I0729 16:10:43.223114    3427 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 16:10:43.223124    3427 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000/id_rsa Username:docker}
	W0729 16:11:58.225325    3427 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0729 16:11:58.225570    3427 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0729 16:11:58.225604    3427 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0729 16:11:58.225623    3427 status.go:257] ha-365000 status: &{Name:ha-365000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 16:11:58.225667    3427 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0729 16:11:58.225685    3427 status.go:255] checking status of ha-365000-m02 ...
	I0729 16:11:58.226533    3427 status.go:330] ha-365000-m02 host status = "Stopped" (err=<nil>)
	I0729 16:11:58.226549    3427 status.go:343] host is not running, skipping remaining checks
	I0729 16:11:58.226558    3427 status.go:257] ha-365000-m02 status: &{Name:ha-365000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 16:11:58.226574    3427 status.go:255] checking status of ha-365000-m03 ...
	I0729 16:11:58.228505    3427 status.go:330] ha-365000-m03 host status = "Running" (err=<nil>)
	I0729 16:11:58.228525    3427 host.go:66] Checking if "ha-365000-m03" exists ...
	I0729 16:11:58.228972    3427 host.go:66] Checking if "ha-365000-m03" exists ...
	I0729 16:11:58.229443    3427 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 16:11:58.229471    3427 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000-m03/id_rsa Username:docker}
	W0729 16:13:13.231698    3427 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0729 16:13:13.231937    3427 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0729 16:13:13.231974    3427 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0729 16:13:13.231992    3427 status.go:257] ha-365000-m03 status: &{Name:ha-365000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 16:13:13.232035    3427 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0729 16:13:13.232053    3427 status.go:255] checking status of ha-365000-m04 ...
	I0729 16:13:13.234624    3427 status.go:330] ha-365000-m04 host status = "Running" (err=<nil>)
	I0729 16:13:13.234649    3427 host.go:66] Checking if "ha-365000-m04" exists ...
	I0729 16:13:13.235136    3427 host.go:66] Checking if "ha-365000-m04" exists ...
	I0729 16:13:13.235556    3427 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 16:13:13.235584    3427 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000-m04/id_rsa Username:docker}
	W0729 16:14:28.237718    3427 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0729 16:14:28.237963    3427 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0729 16:14:28.238022    3427 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0729 16:14:28.238042    3427 status.go:257] ha-365000-m04 status: &{Name:ha-365000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0729 16:14:28.238086    3427 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000
E0729 16:14:39.637998    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000: exit status 3 (1m15.071124375s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0729 16:15:43.309145    3492 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0729 16:15:43.309207    3492 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-365000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (305.31s)
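
A note on the failure mode: both restart attempts above die at the same precondition. The qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused" on a unix socket means nothing is listening behind the path). The Go program below is a minimal, hypothetical stand-alone probe for that precondition; it is not part of minikube or this suite, it simply dials the same unix socket the driver hands to socket_vmnet_client.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Assumed socket path, taken from the SocketVMnetPath logged in the
	// profile config in this report.
	const path = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		// Mirrors the driver's failure: the socket file may exist, but no
		// socket_vmnet daemon is accepting connections on it.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}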

TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.57s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-365000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-365000 -v=7 --alsologtostderr
E0729 16:19:39.636117    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0729 16:21:18.248032    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-365000 -v=7 --alsologtostderr: (5m27.168270084s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-365000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-365000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.233482416s)

-- stdout --
	* [ha-365000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-365000" primary control-plane node in "ha-365000" cluster
	* Restarting existing qemu2 VM for "ha-365000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-365000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
-- /stdout --
** stderr ** 
	I0729 16:23:40.716679    3608 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:23:40.716875    3608 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:23:40.716880    3608 out.go:304] Setting ErrFile to fd 2...
	I0729 16:23:40.716883    3608 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:23:40.717071    3608 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:23:40.718351    3608 out.go:298] Setting JSON to false
	I0729 16:23:40.739693    3608 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3187,"bootTime":1722292233,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:23:40.739765    3608 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:23:40.743628    3608 out.go:177] * [ha-365000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:23:40.751654    3608 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:23:40.751719    3608 notify.go:220] Checking for updates...
	I0729 16:23:40.759609    3608 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:23:40.763626    3608 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:23:40.766648    3608 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:23:40.769651    3608 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:23:40.772556    3608 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:23:40.775935    3608 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:23:40.775992    3608 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:23:40.779611    3608 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:23:40.786610    3608 start.go:297] selected driver: qemu2
	I0729 16:23:40.786619    3608 start.go:901] validating driver "qemu2" against &{Name:ha-365000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-365000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:23:40.786710    3608 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:23:40.789710    3608 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:23:40.789764    3608 cni.go:84] Creating CNI manager for ""
	I0729 16:23:40.789769    3608 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 16:23:40.789823    3608 start.go:340] cluster config:
	{Name:ha-365000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-365000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:23:40.794483    3608 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:23:40.801623    3608 out.go:177] * Starting "ha-365000" primary control-plane node in "ha-365000" cluster
	I0729 16:23:40.805608    3608 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:23:40.805627    3608 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:23:40.805641    3608 cache.go:56] Caching tarball of preloaded images
	I0729 16:23:40.805695    3608 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:23:40.805700    3608 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:23:40.805764    3608 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/ha-365000/config.json ...
	I0729 16:23:40.806224    3608 start.go:360] acquireMachinesLock for ha-365000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:23:40.806265    3608 start.go:364] duration metric: took 34.458µs to acquireMachinesLock for "ha-365000"
	I0729 16:23:40.806275    3608 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:23:40.806280    3608 fix.go:54] fixHost starting: 
	I0729 16:23:40.806406    3608 fix.go:112] recreateIfNeeded on ha-365000: state=Stopped err=<nil>
	W0729 16:23:40.806414    3608 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:23:40.809624    3608 out.go:177] * Restarting existing qemu2 VM for "ha-365000" ...
	I0729 16:23:40.816531    3608 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:23:40.816571    3608 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:32:90:df:bc:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000/disk.qcow2
	I0729 16:23:40.818611    3608 main.go:141] libmachine: STDOUT: 
	I0729 16:23:40.818627    3608 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:23:40.818655    3608 fix.go:56] duration metric: took 12.3745ms for fixHost
	I0729 16:23:40.818660    3608 start.go:83] releasing machines lock for "ha-365000", held for 12.390791ms
	W0729 16:23:40.818666    3608 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:23:40.818692    3608 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:23:40.818696    3608 start.go:729] Will try again in 5 seconds ...
	I0729 16:23:45.820910    3608 start.go:360] acquireMachinesLock for ha-365000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:23:45.821375    3608 start.go:364] duration metric: took 325.417µs to acquireMachinesLock for "ha-365000"
	I0729 16:23:45.821513    3608 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:23:45.821534    3608 fix.go:54] fixHost starting: 
	I0729 16:23:45.822247    3608 fix.go:112] recreateIfNeeded on ha-365000: state=Stopped err=<nil>
	W0729 16:23:45.822274    3608 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:23:45.829679    3608 out.go:177] * Restarting existing qemu2 VM for "ha-365000" ...
	I0729 16:23:45.832694    3608 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:23:45.832906    3608 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:32:90:df:bc:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000/disk.qcow2
	I0729 16:23:45.842458    3608 main.go:141] libmachine: STDOUT: 
	I0729 16:23:45.842530    3608 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:23:45.842636    3608 fix.go:56] duration metric: took 21.101416ms for fixHost
	I0729 16:23:45.842657    3608 start.go:83] releasing machines lock for "ha-365000", held for 21.258ms
	W0729 16:23:45.842879    3608 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-365000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-365000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:23:45.851660    3608 out.go:177] 
	W0729 16:23:45.854624    3608 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:23:45.854680    3608 out.go:239] * 
	* 
	W0729 16:23:45.857104    3608 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:23:45.865661    3608 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-365000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-365000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000: exit status 7 (34.256167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.57s)
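
A note on the retry shape: the start goes through fixHost once, logs "StartHost failed, but will try again", sleeps a fixed 5 seconds, retries once, and then exits 80 with GUEST_PROVISION. (The assertion message at ha_test.go:469 quotes the "node list" args, but the command that actually returned exit status 80 is the "start -p ha-365000 --wait=true" run above.) A condensed, hypothetical sketch of that control flow, with the driver failure stubbed out:

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the qemu2 driver start that fails twice above.
func startHost() error {
	return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // the "Will try again in 5 seconds" pause
		if err := startHost(); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: %v\n", err)
		}
	}
}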

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-365000 node delete m03 -v=7 --alsologtostderr: exit status 83 (39.95325ms)

-- stdout --
	* The control-plane node ha-365000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-365000"

-- /stdout --
** stderr ** 
	I0729 16:23:46.010448    3624 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:23:46.010686    3624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:23:46.010690    3624 out.go:304] Setting ErrFile to fd 2...
	I0729 16:23:46.010692    3624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:23:46.010823    3624 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:23:46.011068    3624 mustload.go:65] Loading cluster: ha-365000
	I0729 16:23:46.011289    3624 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0729 16:23:46.011598    3624 out.go:239] ! The control-plane node ha-365000 host is not running (will try others): state=Stopped
	! The control-plane node ha-365000 host is not running (will try others): state=Stopped
	W0729 16:23:46.011708    3624 out.go:239] ! The control-plane node ha-365000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-365000-m02 host is not running (will try others): state=Stopped
	I0729 16:23:46.015796    3624 out.go:177] * The control-plane node ha-365000-m03 host is not running: state=Stopped
	I0729 16:23:46.018614    3624 out.go:177]   To start a cluster, run: "minikube start -p ha-365000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-365000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr: exit status 7 (29.386042ms)

-- stdout --
	ha-365000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-365000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-365000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-365000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0729 16:23:46.049870    3626 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:23:46.050047    3626 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:23:46.050050    3626 out.go:304] Setting ErrFile to fd 2...
	I0729 16:23:46.050053    3626 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:23:46.050185    3626 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:23:46.050298    3626 out.go:298] Setting JSON to false
	I0729 16:23:46.050307    3626 mustload.go:65] Loading cluster: ha-365000
	I0729 16:23:46.050357    3626 notify.go:220] Checking for updates...
	I0729 16:23:46.050558    3626 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:23:46.050567    3626 status.go:255] checking status of ha-365000 ...
	I0729 16:23:46.050769    3626 status.go:330] ha-365000 host status = "Stopped" (err=<nil>)
	I0729 16:23:46.050773    3626 status.go:343] host is not running, skipping remaining checks
	I0729 16:23:46.050775    3626 status.go:257] ha-365000 status: &{Name:ha-365000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 16:23:46.050785    3626 status.go:255] checking status of ha-365000-m02 ...
	I0729 16:23:46.050871    3626 status.go:330] ha-365000-m02 host status = "Stopped" (err=<nil>)
	I0729 16:23:46.050874    3626 status.go:343] host is not running, skipping remaining checks
	I0729 16:23:46.050876    3626 status.go:257] ha-365000-m02 status: &{Name:ha-365000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 16:23:46.050880    3626 status.go:255] checking status of ha-365000-m03 ...
	I0729 16:23:46.050969    3626 status.go:330] ha-365000-m03 host status = "Stopped" (err=<nil>)
	I0729 16:23:46.050972    3626 status.go:343] host is not running, skipping remaining checks
	I0729 16:23:46.050974    3626 status.go:257] ha-365000-m03 status: &{Name:ha-365000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 16:23:46.050978    3626 status.go:255] checking status of ha-365000-m04 ...
	I0729 16:23:46.051071    3626 status.go:330] ha-365000-m04 host status = "Stopped" (err=<nil>)
	I0729 16:23:46.051074    3626 status.go:343] host is not running, skipping remaining checks
	I0729 16:23:46.051076    3626 status.go:257] ha-365000-m04 status: &{Name:ha-365000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000: exit status 7 (29.099ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
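
A note on exit status 83: "node delete" never reaches the delete here. It walks the control-plane hosts in order (ha-365000, then -m02, then -m03), finds each one Stopped, and falls back to printing the start hint. A condensed, hypothetical sketch of that walk, with the states hard-coded to match the status output above:

package main

import "fmt"

// node mirrors only the fields this walk needs; the states are copied from
// the "minikube status" output quoted above.
type node struct {
	name         string
	state        string
	controlPlane bool
}

func main() {
	nodes := []node{
		{"ha-365000", "Stopped", true},
		{"ha-365000-m02", "Stopped", true},
		{"ha-365000-m03", "Stopped", true},
		{"ha-365000-m04", "Stopped", false},
	}
	for _, n := range nodes {
		if !n.controlPlane {
			continue
		}
		if n.state == "Running" {
			fmt.Printf("using control-plane node %s\n", n.name)
			return
		}
		fmt.Printf("! The control-plane node %s host is not running (will try others): state=%s\n", n.name, n.state)
	}
	fmt.Println(`To start a cluster, run: "minikube start -p ha-365000"`)
}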

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-365000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-365000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-365000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-365000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000: exit status 7 (29.154834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
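
A note on the assertion: it inspects the Status field of the profile entry in "profile list --output json"; with every node stopped, minikube reports the aggregate as "Stopped" rather than the expected "Degraded". A minimal sketch of pulling that field out, assuming only the Name/Status keys of the payload quoted above:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// profileList models only the two fields the check above compares.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		log.Fatal(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %s\n", p.Name, p.Status) // here: "ha-365000: Stopped"
	}
}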

TestMultiControlPlane/serial/StopCluster (207.32s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 stop -v=7 --alsologtostderr
E0729 16:24:39.634573    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0729 16:26:18.246355    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-365000 stop -v=7 --alsologtostderr: signal: killed (3m27.247454791s)

-- stdout --
	* Stopping node "ha-365000-m04"  ...
	* Stopping node "ha-365000-m03"  ...
	* Stopping node "ha-365000-m02"  ...

-- /stdout --
** stderr ** 
	I0729 16:23:46.185999    3635 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:23:46.186166    3635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:23:46.186169    3635 out.go:304] Setting ErrFile to fd 2...
	I0729 16:23:46.186171    3635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:23:46.186349    3635 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:23:46.186569    3635 out.go:298] Setting JSON to false
	I0729 16:23:46.186671    3635 mustload.go:65] Loading cluster: ha-365000
	I0729 16:23:46.186870    3635 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:23:46.186931    3635 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/ha-365000/config.json ...
	I0729 16:23:46.187179    3635 mustload.go:65] Loading cluster: ha-365000
	I0729 16:23:46.187262    3635 config.go:182] Loaded profile config "ha-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:23:46.187278    3635 stop.go:39] StopHost: ha-365000-m04
	I0729 16:23:46.190792    3635 out.go:177] * Stopping node "ha-365000-m04"  ...
	I0729 16:23:46.198670    3635 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 16:23:46.198711    3635 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 16:23:46.198719    3635 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000-m04/id_rsa Username:docker}
	W0729 16:25:01.200907    3635 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0729 16:25:01.201276    3635 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0729 16:25:01.201414    3635 main.go:141] libmachine: Stopping "ha-365000-m04"...
	I0729 16:25:01.201589    3635 stop.go:66] stop err: Machine "ha-365000-m04" is already stopped.
	I0729 16:25:01.201618    3635 stop.go:69] host is already stopped
	I0729 16:25:01.201646    3635 stop.go:39] StopHost: ha-365000-m03
	I0729 16:25:01.206961    3635 out.go:177] * Stopping node "ha-365000-m03"  ...
	I0729 16:25:01.215938    3635 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 16:25:01.216103    3635 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 16:25:01.216134    3635 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000-m03/id_rsa Username:docker}
	W0729 16:26:16.217986    3635 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0729 16:26:16.218191    3635 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0729 16:26:16.218260    3635 main.go:141] libmachine: Stopping "ha-365000-m03"...
	I0729 16:26:16.218400    3635 stop.go:66] stop err: Machine "ha-365000-m03" is already stopped.
	I0729 16:26:16.218430    3635 stop.go:69] host is already stopped
	I0729 16:26:16.218459    3635 stop.go:39] StopHost: ha-365000-m02
	I0729 16:26:16.226116    3635 out.go:177] * Stopping node "ha-365000-m02"  ...
	I0729 16:26:16.230072    3635 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 16:26:16.230228    3635 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 16:26:16.230264    3635 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/ha-365000-m02/id_rsa Username:docker}

** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-darwin-arm64 -p ha-365000 stop -v=7 --alsologtostderr": signal: killed
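
The two "dial tcp ...: connect: operation timed out" warnings in the stderr above each blocked for roughly 75 seconds on an SSH dial to a guest that was already powered off, which is what pushed the stop past the point where the harness killed it at 3m27s. A minimal Go sketch of that reachability probe, using a guest IP from the log and a hypothetical 10-second bound in place of the OS default:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The stop path dials <guest>:22 before backing up /etc/cni and
		// /etc/kubernetes; with the VM already stopped, the dial times out.
		conn, err := net.DialTimeout("tcp", "192.168.105.8:22", 10*time.Second)
		if err != nil {
			fmt.Println("ssh unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("ssh reachable")
	}
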
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr: context deadline exceeded (2.708µs)
ha_test.go:540: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr" : context deadline exceeded
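
The 2.708µs status failure follows directly: the stop command consumed the entire shared test context, so by the time the harness ran status the deadline had already passed, and Go's os/exec returns the context's error before even starting the process. A minimal sketch, assuming an already-expired deadline:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// A deadline in the past models the test context after the stop
		// command has eaten the whole timeout.
		ctx, cancel := context.WithDeadline(context.Background(), time.Now())
		defer cancel()

		// exec checks the context before forking, so this fails in microseconds.
		err := exec.CommandContext(ctx, "sleep", "1").Run()
		fmt.Println(err) // context deadline exceeded
	}
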
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-365000 -n ha-365000: exit status 7 (71.509084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (207.32s)

TestImageBuild/serial/Setup (10.22s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-287000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-287000 --driver=qemu2 : exit status 80 (10.17423775s)

-- stdout --
	* [image-287000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-287000" primary control-plane node in "image-287000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-287000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-287000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-287000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-287000 -n image-287000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-287000 -n image-287000: exit status 7 (47.292917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-287000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.22s)
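
This failure, and every other exit-status-80 start in this run, bottoms out in the same line: ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused. The qemu2 driver routes guest networking through the socket_vmnet daemon, and nothing was accepting connections on its socket. A minimal Go sketch of the check, with the socket path taken from the log:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// With no daemon behind the socket this fails with "connect:
		// connection refused" (or "no such file or directory" when the
		// socket file is absent), matching the ERROR lines above.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet accepting connections")
	}
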

TestJSONOutput/start/Command (9.72s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-613000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-613000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.721563667s)

-- stdout --
	{"specversion":"1.0","id":"47208730-8a70-49e0-a85d-e4f0feaa9785","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-613000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"501bdde0-b2fd-4f45-b697-1659e706402f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19348"}}
	{"specversion":"1.0","id":"fab8e834-7141-4f38-9a71-fc58b601dc87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig"}}
	{"specversion":"1.0","id":"8c959bd7-efec-4126-ab42-be2b24c5c094","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"9a5a0fb4-05cd-44b9-bf97-b34760aa0399","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e5629780-4795-425d-981e-5e7ac8903778","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube"}}
	{"specversion":"1.0","id":"88cf7200-63f9-4dbb-9ced-8cb4d086b44c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4e33f32f-7a7e-4300-9825-b5142e8da116","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f0687fcd-ecfe-445b-896b-d19766f9c785","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"4e06ba81-d868-4abf-be35-21520d86d839","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-613000\" primary control-plane node in \"json-output-613000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"16528040-9ad0-47e0-b017-c5d103a2c905","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"1df724b7-c5e4-41a5-afd9-8b5259d9a404","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-613000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"eea5b3b1-d9c6-4abe-a88c-533bfa817073","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"93160a53-ad12-4912-9ebc-871cbd679b95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"ad1886d6-11af-4f88-8e47-2bf26ec99741","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-613000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"2298b1be-cbb0-40a6-8f8a-41beee6c8e59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"beaaab0c-2a3e-468c-a55a-4a7e5d2f0276","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-613000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.72s)
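
The parse failure is mechanical: the stdout above interleaves raw OUTPUT:/ERROR: lines from the VM launcher with the CloudEvents JSON stream, and encoding/json rejects the first byte that is not valid JSON. A minimal reproduction:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var event map[string]interface{}
		err := json.Unmarshal([]byte("OUTPUT: "), &event)
		fmt.Println(err) // invalid character 'O' looking for beginning of value
	}

TestJSONOutput/unpause below trips the same check, there on the leading '*' of minikube's plain-text advisory output.
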

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-613000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-613000 --output=json --user=testUser: exit status 83 (77.827166ms)

-- stdout --
	{"specversion":"1.0","id":"44fa92c2-daaf-4cbb-be18-84ecc814d446","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-613000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"110ecf6b-689d-48dc-95ea-1baa3d09d88e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-613000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-613000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-613000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-613000 --output=json --user=testUser: exit status 83 (43.250542ms)

-- stdout --
	* The control-plane node json-output-613000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-613000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-613000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-613000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.05s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-242000 --driver=qemu2 
E0729 16:27:42.621924    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-242000 --driver=qemu2 : exit status 80 (9.756632792s)

-- stdout --
	* [first-242000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-242000" primary control-plane node in "first-242000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-242000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-242000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-242000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-29 16:27:47.526094 -0700 PDT m=+2479.067265209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-243000 -n second-243000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-243000 -n second-243000: exit status 85 (76.164042ms)

-- stdout --
	* Profile "second-243000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-243000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-243000" host is not running, skipping log retrieval (state="* Profile \"second-243000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-243000\"")
helpers_test.go:175: Cleaning up "second-243000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-243000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-29 16:27:47.710188 -0700 PDT m=+2479.251364292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-242000 -n first-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-242000 -n first-242000: exit status 7 (28.983417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-242000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-242000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-242000
--- FAIL: TestMinikubeProfile (10.05s)

TestMountStart/serial/StartWithMountFirst (9.99s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-322000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-322000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.918964458s)

-- stdout --
	* [mount-start-1-322000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-322000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-322000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-322000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-322000 -n mount-start-1-322000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-322000 -n mount-start-1-322000: exit status 7 (68.420667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-322000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.99s)

TestMultiNode/serial/FreshStart2Nodes (9.89s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-971000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-971000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.815658541s)

-- stdout --
	* [multinode-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-971000" primary control-plane node in "multinode-971000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-971000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:27:58.011841    3982 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:27:58.011989    3982 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:27:58.011993    3982 out.go:304] Setting ErrFile to fd 2...
	I0729 16:27:58.011995    3982 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:27:58.012116    3982 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:27:58.013221    3982 out.go:298] Setting JSON to false
	I0729 16:27:58.029166    3982 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3445,"bootTime":1722292233,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:27:58.029277    3982 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:27:58.035084    3982 out.go:177] * [multinode-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:27:58.043047    3982 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:27:58.043096    3982 notify.go:220] Checking for updates...
	I0729 16:27:58.052023    3982 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:27:58.055104    3982 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:27:58.058099    3982 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:27:58.061031    3982 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:27:58.064120    3982 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:27:58.067239    3982 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:27:58.070985    3982 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:27:58.078071    3982 start.go:297] selected driver: qemu2
	I0729 16:27:58.078081    3982 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:27:58.078090    3982 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:27:58.080323    3982 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:27:58.083927    3982 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:27:58.087147    3982 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:27:58.087172    3982 cni.go:84] Creating CNI manager for ""
	I0729 16:27:58.087176    3982 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 16:27:58.087181    3982 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 16:27:58.087218    3982 start.go:340] cluster config:
	{Name:multinode-971000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:27:58.090836    3982 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:27:58.099025    3982 out.go:177] * Starting "multinode-971000" primary control-plane node in "multinode-971000" cluster
	I0729 16:27:58.103029    3982 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:27:58.103044    3982 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:27:58.103052    3982 cache.go:56] Caching tarball of preloaded images
	I0729 16:27:58.103108    3982 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:27:58.103114    3982 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:27:58.103339    3982 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/multinode-971000/config.json ...
	I0729 16:27:58.103351    3982 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/multinode-971000/config.json: {Name:mk6877648baf6d924ce55a6598be6ecbcb54d0bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:27:58.103567    3982 start.go:360] acquireMachinesLock for multinode-971000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:27:58.103607    3982 start.go:364] duration metric: took 33.458µs to acquireMachinesLock for "multinode-971000"
	I0729 16:27:58.103620    3982 start.go:93] Provisioning new machine with config: &{Name:multinode-971000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:27:58.103650    3982 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:27:58.112062    3982 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:27:58.130348    3982 start.go:159] libmachine.API.Create for "multinode-971000" (driver="qemu2")
	I0729 16:27:58.130372    3982 client.go:168] LocalClient.Create starting
	I0729 16:27:58.130446    3982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:27:58.130480    3982 main.go:141] libmachine: Decoding PEM data...
	I0729 16:27:58.130489    3982 main.go:141] libmachine: Parsing certificate...
	I0729 16:27:58.130531    3982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:27:58.130558    3982 main.go:141] libmachine: Decoding PEM data...
	I0729 16:27:58.130567    3982 main.go:141] libmachine: Parsing certificate...
	I0729 16:27:58.130924    3982 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:27:58.273835    3982 main.go:141] libmachine: Creating SSH key...
	I0729 16:27:58.327449    3982 main.go:141] libmachine: Creating Disk image...
	I0729 16:27:58.327455    3982 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:27:58.327630    3982 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/disk.qcow2
	I0729 16:27:58.336735    3982 main.go:141] libmachine: STDOUT: 
	I0729 16:27:58.336753    3982 main.go:141] libmachine: STDERR: 
	I0729 16:27:58.336806    3982 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/disk.qcow2 +20000M
	I0729 16:27:58.344463    3982 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:27:58.344479    3982 main.go:141] libmachine: STDERR: 
	I0729 16:27:58.344490    3982 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/disk.qcow2
	I0729 16:27:58.344493    3982 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:27:58.344506    3982 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:27:58.344538    3982 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:c3:ef:3e:14:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/disk.qcow2
	I0729 16:27:58.346180    3982 main.go:141] libmachine: STDOUT: 
	I0729 16:27:58.346195    3982 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:27:58.346215    3982 client.go:171] duration metric: took 215.845917ms to LocalClient.Create
	I0729 16:28:00.348340    3982 start.go:128] duration metric: took 2.244732292s to createHost
	I0729 16:28:00.348396    3982 start.go:83] releasing machines lock for "multinode-971000", held for 2.244847208s
	W0729 16:28:00.348473    3982 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:28:00.366892    3982 out.go:177] * Deleting "multinode-971000" in qemu2 ...
	W0729 16:28:00.393665    3982 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:28:00.393694    3982 start.go:729] Will try again in 5 seconds ...
	I0729 16:28:05.395777    3982 start.go:360] acquireMachinesLock for multinode-971000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:28:05.396300    3982 start.go:364] duration metric: took 412.958µs to acquireMachinesLock for "multinode-971000"
	I0729 16:28:05.396451    3982 start.go:93] Provisioning new machine with config: &{Name:multinode-971000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:28:05.396754    3982 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:28:05.411277    3982 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:28:05.461947    3982 start.go:159] libmachine.API.Create for "multinode-971000" (driver="qemu2")
	I0729 16:28:05.461997    3982 client.go:168] LocalClient.Create starting
	I0729 16:28:05.462128    3982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:28:05.462190    3982 main.go:141] libmachine: Decoding PEM data...
	I0729 16:28:05.462206    3982 main.go:141] libmachine: Parsing certificate...
	I0729 16:28:05.462272    3982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:28:05.462316    3982 main.go:141] libmachine: Decoding PEM data...
	I0729 16:28:05.462326    3982 main.go:141] libmachine: Parsing certificate...
	I0729 16:28:05.463120    3982 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:28:05.615163    3982 main.go:141] libmachine: Creating SSH key...
	I0729 16:28:05.737051    3982 main.go:141] libmachine: Creating Disk image...
	I0729 16:28:05.737060    3982 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:28:05.737231    3982 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/disk.qcow2
	I0729 16:28:05.746215    3982 main.go:141] libmachine: STDOUT: 
	I0729 16:28:05.746232    3982 main.go:141] libmachine: STDERR: 
	I0729 16:28:05.746279    3982 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/disk.qcow2 +20000M
	I0729 16:28:05.753982    3982 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:28:05.753997    3982 main.go:141] libmachine: STDERR: 
	I0729 16:28:05.754007    3982 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/disk.qcow2
	I0729 16:28:05.754012    3982 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:28:05.754026    3982 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:28:05.754053    3982 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:a2:1b:62:d8:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/disk.qcow2
	I0729 16:28:05.755650    3982 main.go:141] libmachine: STDOUT: 
	I0729 16:28:05.755675    3982 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:28:05.755689    3982 client.go:171] duration metric: took 293.695583ms to LocalClient.Create
	I0729 16:28:07.757811    3982 start.go:128] duration metric: took 2.361100667s to createHost
	I0729 16:28:07.757951    3982 start.go:83] releasing machines lock for "multinode-971000", held for 2.361614042s
	W0729 16:28:07.758394    3982 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-971000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-971000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:28:07.769847    3982 out.go:177] 
	W0729 16:28:07.774741    3982 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:28:07.774782    3982 out.go:239] * 
	* 
	W0729 16:28:07.777479    3982 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:28:07.786862    3982 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-971000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000: exit status 7 (67.590042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.89s)
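
The verbose stderr above records the full launch command: minikube does not exec qemu-system-aarch64 directly but through socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connected socket to qemu as descriptor 3 (the "-netdev socket,id=net0,fd=3" argument). A rough Go sketch of that handoff, with error handling trimmed and the qemu flags cut down to the one that matters here:

	package main

	import (
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			panic(err) // the "Connection refused" seen on both attempts above
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			panic(err)
		}

		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		// ExtraFiles entry i becomes child descriptor 3+i, so the first
		// entry is exactly the fd=3 that the -netdev flag names.
		cmd.ExtraFiles = []*os.File{f}
		_ = cmd.Run()
	}
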

TestMultiNode/serial/DeployApp2Nodes (91.95s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-971000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-971000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (128.955375ms)

** stderr ** 
	error: cluster "multinode-971000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-971000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-971000 -- rollout status deployment/busybox: exit status 1 (57.279709ms)

** stderr ** 
	error: no server found for cluster "multinode-971000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (55.472917ms)

** stderr ** 
	error: no server found for cluster "multinode-971000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.805792ms)

** stderr ** 
	error: no server found for cluster "multinode-971000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.160708ms)

** stderr ** 
	error: no server found for cluster "multinode-971000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.942792ms)

** stderr ** 
	error: no server found for cluster "multinode-971000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.44875ms)

** stderr ** 
	error: no server found for cluster "multinode-971000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.2075ms)

** stderr ** 
	error: no server found for cluster "multinode-971000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.468375ms)

** stderr ** 
	error: no server found for cluster "multinode-971000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.1315ms)

** stderr ** 
	error: no server found for cluster "multinode-971000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.361209ms)

** stderr ** 
	error: no server found for cluster "multinode-971000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}'
E0729 16:29:39.548653    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.127292ms)

** stderr ** 
	error: no server found for cluster "multinode-971000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.770375ms)

** stderr ** 
	error: no server found for cluster "multinode-971000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-971000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-971000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.892209ms)

** stderr ** 
	error: no server found for cluster "multinode-971000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-971000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-971000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.221042ms)

** stderr ** 
	error: no server found for cluster "multinode-971000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-971000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-971000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.8615ms)

** stderr ** 
	error: no server found for cluster "multinode-971000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000: exit status 7 (29.656333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (91.95s)

TestMultiNode/serial/PingHostFrom2Pods (0.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.480542ms)

** stderr ** 
	error: no server found for cluster "multinode-971000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000: exit status 7 (28.349958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-971000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-971000 -v 3 --alsologtostderr: exit status 83 (43.989959ms)

-- stdout --
	* The control-plane node multinode-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-971000"

-- /stdout --
** stderr ** 
	I0729 16:29:39.925453    4067 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:29:39.925587    4067 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:29:39.925591    4067 out.go:304] Setting ErrFile to fd 2...
	I0729 16:29:39.925593    4067 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:29:39.925734    4067 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:29:39.925971    4067 mustload.go:65] Loading cluster: multinode-971000
	I0729 16:29:39.926146    4067 config.go:182] Loaded profile config "multinode-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:29:39.931002    4067 out.go:177] * The control-plane node multinode-971000 host is not running: state=Stopped
	I0729 16:29:39.936049    4067 out.go:177]   To start a cluster, run: "minikube start -p multinode-971000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-971000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000: exit status 7 (29.597ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-971000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-971000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.948584ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-971000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-971000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-971000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000: exit status 7 (28.831583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.07s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-971000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-971000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-971000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-971000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000: exit status 7 (28.179375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.07s)

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-971000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-971000 status --output json --alsologtostderr: exit status 7 (29.999791ms)

-- stdout --
	{"Name":"multinode-971000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0729 16:29:40.131188    4079 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:29:40.131572    4079 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:29:40.131576    4079 out.go:304] Setting ErrFile to fd 2...
	I0729 16:29:40.131579    4079 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:29:40.131766    4079 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:29:40.131916    4079 out.go:298] Setting JSON to true
	I0729 16:29:40.131925    4079 mustload.go:65] Loading cluster: multinode-971000
	I0729 16:29:40.132050    4079 notify.go:220] Checking for updates...
	I0729 16:29:40.132367    4079 config.go:182] Loaded profile config "multinode-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:29:40.132385    4079 status.go:255] checking status of multinode-971000 ...
	I0729 16:29:40.132591    4079 status.go:330] multinode-971000 host status = "Stopped" (err=<nil>)
	I0729 16:29:40.132595    4079 status.go:343] host is not running, skipping remaining checks
	I0729 16:29:40.132598    4079 status.go:257] multinode-971000 status: &{Name:multinode-971000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-971000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000: exit status 7 (28.653125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-971000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-971000 node stop m03: exit status 85 (45.13625ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-971000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-971000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-971000 status: exit status 7 (29.705417ms)

-- stdout --
	multinode-971000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-971000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-971000 status --alsologtostderr: exit status 7 (29.506708ms)

-- stdout --
	multinode-971000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:29:40.265557    4087 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:29:40.265714    4087 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:29:40.265718    4087 out.go:304] Setting ErrFile to fd 2...
	I0729 16:29:40.265720    4087 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:29:40.265864    4087 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:29:40.265988    4087 out.go:298] Setting JSON to false
	I0729 16:29:40.266003    4087 mustload.go:65] Loading cluster: multinode-971000
	I0729 16:29:40.266059    4087 notify.go:220] Checking for updates...
	I0729 16:29:40.266212    4087 config.go:182] Loaded profile config "multinode-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:29:40.266219    4087 status.go:255] checking status of multinode-971000 ...
	I0729 16:29:40.266431    4087 status.go:330] multinode-971000 host status = "Stopped" (err=<nil>)
	I0729 16:29:40.266435    4087 status.go:343] host is not running, skipping remaining checks
	I0729 16:29:40.266437    4087 status.go:257] multinode-971000 status: &{Name:multinode-971000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-971000 status --alsologtostderr": multinode-971000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000: exit status 7 (29.029083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)

TestMultiNode/serial/StartAfterStop (57.83s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-971000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-971000 node start m03 -v=7 --alsologtostderr: exit status 85 (45.10375ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0729 16:29:40.323742    4091 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:29:40.323972    4091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:29:40.323975    4091 out.go:304] Setting ErrFile to fd 2...
	I0729 16:29:40.323977    4091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:29:40.324096    4091 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:29:40.324350    4091 mustload.go:65] Loading cluster: multinode-971000
	I0729 16:29:40.324523    4091 config.go:182] Loaded profile config "multinode-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:29:40.328995    4091 out.go:177] 
	W0729 16:29:40.332001    4091 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0729 16:29:40.332005    4091 out.go:239] * 
	* 
	W0729 16:29:40.333576    4091 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:29:40.336956    4091 out.go:177] 

** /stderr **
multinode_test.go:284: I0729 16:29:40.323742    4091 out.go:291] Setting OutFile to fd 1 ...
I0729 16:29:40.323972    4091 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:29:40.323975    4091 out.go:304] Setting ErrFile to fd 2...
I0729 16:29:40.323977    4091 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 16:29:40.324096    4091 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
I0729 16:29:40.324350    4091 mustload.go:65] Loading cluster: multinode-971000
I0729 16:29:40.324523    4091 config.go:182] Loaded profile config "multinode-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 16:29:40.328995    4091 out.go:177] 
W0729 16:29:40.332001    4091 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0729 16:29:40.332005    4091 out.go:239] * 
* 
W0729 16:29:40.333576    4091 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 16:29:40.336956    4091 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-971000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-971000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-971000 status -v=7 --alsologtostderr: exit status 7 (29.299625ms)

-- stdout --
	multinode-971000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:29:40.369548    4093 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:29:40.369698    4093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:29:40.369701    4093 out.go:304] Setting ErrFile to fd 2...
	I0729 16:29:40.369703    4093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:29:40.369825    4093 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:29:40.369956    4093 out.go:298] Setting JSON to false
	I0729 16:29:40.369968    4093 mustload.go:65] Loading cluster: multinode-971000
	I0729 16:29:40.370028    4093 notify.go:220] Checking for updates...
	I0729 16:29:40.370154    4093 config.go:182] Loaded profile config "multinode-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:29:40.370162    4093 status.go:255] checking status of multinode-971000 ...
	I0729 16:29:40.370368    4093 status.go:330] multinode-971000 host status = "Stopped" (err=<nil>)
	I0729 16:29:40.370371    4093 status.go:343] host is not running, skipping remaining checks
	I0729 16:29:40.370373    4093 status.go:257] multinode-971000 status: &{Name:multinode-971000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-971000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-971000 status -v=7 --alsologtostderr: exit status 7 (71.660042ms)

-- stdout --
	multinode-971000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:29:41.771716    4095 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:29:41.771968    4095 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:29:41.771973    4095 out.go:304] Setting ErrFile to fd 2...
	I0729 16:29:41.771976    4095 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:29:41.772154    4095 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:29:41.772318    4095 out.go:298] Setting JSON to false
	I0729 16:29:41.772330    4095 mustload.go:65] Loading cluster: multinode-971000
	I0729 16:29:41.772364    4095 notify.go:220] Checking for updates...
	I0729 16:29:41.772587    4095 config.go:182] Loaded profile config "multinode-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:29:41.772597    4095 status.go:255] checking status of multinode-971000 ...
	I0729 16:29:41.772918    4095 status.go:330] multinode-971000 host status = "Stopped" (err=<nil>)
	I0729 16:29:41.772923    4095 status.go:343] host is not running, skipping remaining checks
	I0729 16:29:41.772926    4095 status.go:257] multinode-971000 status: &{Name:multinode-971000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-971000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-971000 status -v=7 --alsologtostderr: exit status 7 (72.286666ms)

-- stdout --
	multinode-971000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:29:43.021498    4097 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:29:43.021691    4097 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:29:43.021695    4097 out.go:304] Setting ErrFile to fd 2...
	I0729 16:29:43.021699    4097 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:29:43.021865    4097 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:29:43.022028    4097 out.go:298] Setting JSON to false
	I0729 16:29:43.022045    4097 mustload.go:65] Loading cluster: multinode-971000
	I0729 16:29:43.022081    4097 notify.go:220] Checking for updates...
	I0729 16:29:43.022305    4097 config.go:182] Loaded profile config "multinode-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:29:43.022314    4097 status.go:255] checking status of multinode-971000 ...
	I0729 16:29:43.022605    4097 status.go:330] multinode-971000 host status = "Stopped" (err=<nil>)
	I0729 16:29:43.022610    4097 status.go:343] host is not running, skipping remaining checks
	I0729 16:29:43.022613    4097 status.go:257] multinode-971000 status: &{Name:multinode-971000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-971000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-971000 status -v=7 --alsologtostderr: exit status 7 (71.825833ms)

-- stdout --
	multinode-971000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:29:46.136318    4102 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:29:46.136534    4102 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:29:46.136538    4102 out.go:304] Setting ErrFile to fd 2...
	I0729 16:29:46.136541    4102 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:29:46.136704    4102 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:29:46.136871    4102 out.go:298] Setting JSON to false
	I0729 16:29:46.136883    4102 mustload.go:65] Loading cluster: multinode-971000
	I0729 16:29:46.136923    4102 notify.go:220] Checking for updates...
	I0729 16:29:46.137128    4102 config.go:182] Loaded profile config "multinode-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:29:46.137136    4102 status.go:255] checking status of multinode-971000 ...
	I0729 16:29:46.137415    4102 status.go:330] multinode-971000 host status = "Stopped" (err=<nil>)
	I0729 16:29:46.137421    4102 status.go:343] host is not running, skipping remaining checks
	I0729 16:29:46.137424    4102 status.go:257] multinode-971000 status: &{Name:multinode-971000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-971000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-971000 status -v=7 --alsologtostderr: exit status 7 (73.371208ms)

-- stdout --
	multinode-971000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:29:50.812965    4104 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:29:50.813160    4104 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:29:50.813164    4104 out.go:304] Setting ErrFile to fd 2...
	I0729 16:29:50.813168    4104 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:29:50.813331    4104 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:29:50.813502    4104 out.go:298] Setting JSON to false
	I0729 16:29:50.813514    4104 mustload.go:65] Loading cluster: multinode-971000
	I0729 16:29:50.813549    4104 notify.go:220] Checking for updates...
	I0729 16:29:50.813790    4104 config.go:182] Loaded profile config "multinode-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:29:50.813799    4104 status.go:255] checking status of multinode-971000 ...
	I0729 16:29:50.814060    4104 status.go:330] multinode-971000 host status = "Stopped" (err=<nil>)
	I0729 16:29:50.814065    4104 status.go:343] host is not running, skipping remaining checks
	I0729 16:29:50.814068    4104 status.go:257] multinode-971000 status: &{Name:multinode-971000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-971000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-971000 status -v=7 --alsologtostderr: exit status 7 (73.346458ms)

-- stdout --
	multinode-971000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:29:57.107533    4106 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:29:57.107728    4106 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:29:57.107732    4106 out.go:304] Setting ErrFile to fd 2...
	I0729 16:29:57.107735    4106 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:29:57.107940    4106 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:29:57.108091    4106 out.go:298] Setting JSON to false
	I0729 16:29:57.108104    4106 mustload.go:65] Loading cluster: multinode-971000
	I0729 16:29:57.108143    4106 notify.go:220] Checking for updates...
	I0729 16:29:57.108350    4106 config.go:182] Loaded profile config "multinode-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:29:57.108358    4106 status.go:255] checking status of multinode-971000 ...
	I0729 16:29:57.108655    4106 status.go:330] multinode-971000 host status = "Stopped" (err=<nil>)
	I0729 16:29:57.108660    4106 status.go:343] host is not running, skipping remaining checks
	I0729 16:29:57.108663    4106 status.go:257] multinode-971000 status: &{Name:multinode-971000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-971000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-971000 status -v=7 --alsologtostderr: exit status 7 (72.426375ms)

-- stdout --
	multinode-971000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:30:05.324557    4366 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:30:05.324757    4366 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:30:05.324762    4366 out.go:304] Setting ErrFile to fd 2...
	I0729 16:30:05.324765    4366 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:30:05.324960    4366 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:30:05.325109    4366 out.go:298] Setting JSON to false
	I0729 16:30:05.325122    4366 mustload.go:65] Loading cluster: multinode-971000
	I0729 16:30:05.325165    4366 notify.go:220] Checking for updates...
	I0729 16:30:05.325384    4366 config.go:182] Loaded profile config "multinode-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:30:05.325393    4366 status.go:255] checking status of multinode-971000 ...
	I0729 16:30:05.325665    4366 status.go:330] multinode-971000 host status = "Stopped" (err=<nil>)
	I0729 16:30:05.325670    4366 status.go:343] host is not running, skipping remaining checks
	I0729 16:30:05.325677    4366 status.go:257] multinode-971000 status: &{Name:multinode-971000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-971000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-971000 status -v=7 --alsologtostderr: exit status 7 (74.166917ms)

-- stdout --
	multinode-971000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:30:12.545336    4368 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:30:12.545601    4368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:30:12.545606    4368 out.go:304] Setting ErrFile to fd 2...
	I0729 16:30:12.545609    4368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:30:12.545810    4368 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:30:12.545995    4368 out.go:298] Setting JSON to false
	I0729 16:30:12.546010    4368 mustload.go:65] Loading cluster: multinode-971000
	I0729 16:30:12.546051    4368 notify.go:220] Checking for updates...
	I0729 16:30:12.546252    4368 config.go:182] Loaded profile config "multinode-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:30:12.546263    4368 status.go:255] checking status of multinode-971000 ...
	I0729 16:30:12.546543    4368 status.go:330] multinode-971000 host status = "Stopped" (err=<nil>)
	I0729 16:30:12.546549    4368 status.go:343] host is not running, skipping remaining checks
	I0729 16:30:12.546552    4368 status.go:257] multinode-971000 status: &{Name:multinode-971000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-971000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-971000 status -v=7 --alsologtostderr: exit status 7 (71.985375ms)

-- stdout --
	multinode-971000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 16:30:38.085220    4383 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:30:38.085465    4383 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:30:38.085470    4383 out.go:304] Setting ErrFile to fd 2...
	I0729 16:30:38.085473    4383 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:30:38.085653    4383 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:30:38.085839    4383 out.go:298] Setting JSON to false
	I0729 16:30:38.085853    4383 mustload.go:65] Loading cluster: multinode-971000
	I0729 16:30:38.085902    4383 notify.go:220] Checking for updates...
	I0729 16:30:38.086151    4383 config.go:182] Loaded profile config "multinode-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:30:38.086161    4383 status.go:255] checking status of multinode-971000 ...
	I0729 16:30:38.086467    4383 status.go:330] multinode-971000 host status = "Stopped" (err=<nil>)
	I0729 16:30:38.086472    4383 status.go:343] host is not running, skipping remaining checks
	I0729 16:30:38.086474    4383 status.go:257] multinode-971000 status: &{Name:multinode-971000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-971000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000: exit status 7 (32.527083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (57.83s)
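
Note on exit status 7: `minikube status` encodes component state bit-by-bit in its exit code (per the command's own help text: 1 for the host, 2 for the cluster/kubelet, 4 for Kubernetes), so a fully stopped profile returns 1+2+4 = 7, which is exactly what every status call above reports. An illustrative way to see the decomposition on this profile, not part of the test run:

	# exit 7 = host(1) + kubelet(2) + apiserver(4) all stopped
	out/minikube-darwin-arm64 -p multinode-971000 status; echo "exit: $?"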

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (9.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-971000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-971000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-971000: (4.006156458s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-971000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-971000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.226283042s)

                                                
                                                
-- stdout --
	* [multinode-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-971000" primary control-plane node in "multinode-971000" cluster
	* Restarting existing qemu2 VM for "multinode-971000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-971000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:30:42.221861    4410 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:30:42.222066    4410 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:30:42.222071    4410 out.go:304] Setting ErrFile to fd 2...
	I0729 16:30:42.222075    4410 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:30:42.222248    4410 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:30:42.223573    4410 out.go:298] Setting JSON to false
	I0729 16:30:42.243836    4410 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3609,"bootTime":1722292233,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:30:42.243910    4410 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:30:42.246218    4410 out.go:177] * [multinode-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:30:42.254523    4410 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:30:42.254573    4410 notify.go:220] Checking for updates...
	I0729 16:30:42.261522    4410 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:30:42.264523    4410 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:30:42.267495    4410 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:30:42.270486    4410 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:30:42.273497    4410 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:30:42.276664    4410 config.go:182] Loaded profile config "multinode-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:30:42.276719    4410 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:30:42.281428    4410 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:30:42.287466    4410 start.go:297] selected driver: qemu2
	I0729 16:30:42.287475    4410 start.go:901] validating driver "qemu2" against &{Name:multinode-971000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:30:42.287544    4410 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:30:42.290151    4410 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:30:42.290191    4410 cni.go:84] Creating CNI manager for ""
	I0729 16:30:42.290197    4410 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 16:30:42.290247    4410 start.go:340] cluster config:
	{Name:multinode-971000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:30:42.294144    4410 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:30:42.302495    4410 out.go:177] * Starting "multinode-971000" primary control-plane node in "multinode-971000" cluster
	I0729 16:30:42.306431    4410 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:30:42.306445    4410 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:30:42.306457    4410 cache.go:56] Caching tarball of preloaded images
	I0729 16:30:42.306513    4410 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:30:42.306519    4410 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:30:42.306575    4410 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/multinode-971000/config.json ...
	I0729 16:30:42.307022    4410 start.go:360] acquireMachinesLock for multinode-971000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:30:42.307062    4410 start.go:364] duration metric: took 33.875µs to acquireMachinesLock for "multinode-971000"
	I0729 16:30:42.307073    4410 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:30:42.307080    4410 fix.go:54] fixHost starting: 
	I0729 16:30:42.307212    4410 fix.go:112] recreateIfNeeded on multinode-971000: state=Stopped err=<nil>
	W0729 16:30:42.307221    4410 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:30:42.311487    4410 out.go:177] * Restarting existing qemu2 VM for "multinode-971000" ...
	I0729 16:30:42.319463    4410 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:30:42.319510    4410 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:a2:1b:62:d8:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/disk.qcow2
	I0729 16:30:42.321732    4410 main.go:141] libmachine: STDOUT: 
	I0729 16:30:42.321750    4410 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:30:42.321780    4410 fix.go:56] duration metric: took 14.700292ms for fixHost
	I0729 16:30:42.321786    4410 start.go:83] releasing machines lock for "multinode-971000", held for 14.718834ms
	W0729 16:30:42.321792    4410 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:30:42.321821    4410 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:30:42.321826    4410 start.go:729] Will try again in 5 seconds ...
	I0729 16:30:47.323953    4410 start.go:360] acquireMachinesLock for multinode-971000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:30:47.324409    4410 start.go:364] duration metric: took 343.458µs to acquireMachinesLock for "multinode-971000"
	I0729 16:30:47.324525    4410 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:30:47.324545    4410 fix.go:54] fixHost starting: 
	I0729 16:30:47.325259    4410 fix.go:112] recreateIfNeeded on multinode-971000: state=Stopped err=<nil>
	W0729 16:30:47.325286    4410 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:30:47.329836    4410 out.go:177] * Restarting existing qemu2 VM for "multinode-971000" ...
	I0729 16:30:47.335799    4410 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:30:47.336031    4410 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:a2:1b:62:d8:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/disk.qcow2
	I0729 16:30:47.345715    4410 main.go:141] libmachine: STDOUT: 
	I0729 16:30:47.345800    4410 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:30:47.345905    4410 fix.go:56] duration metric: took 21.360125ms for fixHost
	I0729 16:30:47.345931    4410 start.go:83] releasing machines lock for "multinode-971000", held for 21.498542ms
	W0729 16:30:47.346113    4410 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-971000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-971000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:30:47.354715    4410 out.go:177] 
	W0729 16:30:47.358820    4410 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:30:47.358850    4410 out.go:239] * 
	* 
	W0729 16:30:47.361351    4410 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:30:47.369603    4410 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-971000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-971000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000: exit status 7 (31.870083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.37s)
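
Every restart attempt in this run dies at the same step: the `libmachine: executing:` lines above show qemu being launched through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the daemon socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"). That points at the socket_vmnet service on the CI host rather than at minikube or qemu itself. A host-side triage sketch (paths are the ones from the log; the Homebrew-managed service is an assumption about how this host installed socket_vmnet):

	ls -l /var/run/socket_vmnet               # does the Unix socket exist?
	pgrep -fl socket_vmnet                    # is the daemon process alive?
	sudo brew services restart socket_vmnet   # assumes a Homebrew-managed service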

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-971000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-971000 node delete m03: exit status 83 (39.536166ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-971000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-971000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-971000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-971000 status --alsologtostderr: exit status 7 (28.495041ms)

                                                
                                                
-- stdout --
	multinode-971000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:30:47.553512    4425 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:30:47.553674    4425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:30:47.553677    4425 out.go:304] Setting ErrFile to fd 2...
	I0729 16:30:47.553680    4425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:30:47.553797    4425 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:30:47.553934    4425 out.go:298] Setting JSON to false
	I0729 16:30:47.553943    4425 mustload.go:65] Loading cluster: multinode-971000
	I0729 16:30:47.554014    4425 notify.go:220] Checking for updates...
	I0729 16:30:47.554123    4425 config.go:182] Loaded profile config "multinode-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:30:47.554129    4425 status.go:255] checking status of multinode-971000 ...
	I0729 16:30:47.554367    4425 status.go:330] multinode-971000 host status = "Stopped" (err=<nil>)
	I0729 16:30:47.554371    4425 status.go:343] host is not running, skipping remaining checks
	I0729 16:30:47.554373    4425 status.go:257] multinode-971000 status: &{Name:multinode-971000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-971000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000: exit status 7 (29.048625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
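
The post-mortem's --format={{.Host}} works because `minikube status` renders a Go template over the same Status struct the debug log prints (&{Name:... Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped ...}), so the other fields can be queried the same way. An illustrative one-liner against this profile (not part of the test run):

	# prints Stopped/Stopped/Stopped for the state captured above
	out/minikube-darwin-arm64 status -p multinode-971000 --format='{{.Host}}/{{.Kubelet}}/{{.APIServer}}'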

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-971000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-971000 stop: (3.501988708s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-971000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-971000 status: exit status 7 (65.335791ms)

                                                
                                                
-- stdout --
	multinode-971000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-971000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-971000 status --alsologtostderr: exit status 7 (31.823458ms)

                                                
                                                
-- stdout --
	multinode-971000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:30:51.182306    4451 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:30:51.182443    4451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:30:51.182446    4451 out.go:304] Setting ErrFile to fd 2...
	I0729 16:30:51.182449    4451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:30:51.182587    4451 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:30:51.182699    4451 out.go:298] Setting JSON to false
	I0729 16:30:51.182708    4451 mustload.go:65] Loading cluster: multinode-971000
	I0729 16:30:51.182759    4451 notify.go:220] Checking for updates...
	I0729 16:30:51.182906    4451 config.go:182] Loaded profile config "multinode-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:30:51.182912    4451 status.go:255] checking status of multinode-971000 ...
	I0729 16:30:51.183099    4451 status.go:330] multinode-971000 host status = "Stopped" (err=<nil>)
	I0729 16:30:51.183103    4451 status.go:343] host is not running, skipping remaining checks
	I0729 16:30:51.183105    4451 status.go:257] multinode-971000 status: &{Name:multinode-971000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-971000 status --alsologtostderr": multinode-971000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-971000 status --alsologtostderr": multinode-971000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000: exit status 7 (28.620792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.63s)
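
The two "incorrect number of ..." assertions fail on a count, not on state: the stop itself succeeded, but only one node reports in, because the worker was never added (AddNode failed earlier in this run with the same socket_vmnet error). The checks at multinode_test.go:364/368 presumably count status stanzas in the output; an equivalent hand check (illustrative only, not the test's actual code):

	# a two-node cluster should yield 2; this run yields 1
	out/minikube-darwin-arm64 -p multinode-971000 status --alsologtostderr | grep -c 'host: Stopped'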

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-971000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-971000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.184459s)

                                                
                                                
-- stdout --
	* [multinode-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-971000" primary control-plane node in "multinode-971000" cluster
	* Restarting existing qemu2 VM for "multinode-971000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-971000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:30:51.240267    4455 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:30:51.240415    4455 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:30:51.240418    4455 out.go:304] Setting ErrFile to fd 2...
	I0729 16:30:51.240421    4455 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:30:51.240540    4455 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:30:51.241547    4455 out.go:298] Setting JSON to false
	I0729 16:30:51.257510    4455 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3618,"bootTime":1722292233,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:30:51.257574    4455 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:30:51.262902    4455 out.go:177] * [multinode-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:30:51.270908    4455 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:30:51.270958    4455 notify.go:220] Checking for updates...
	I0729 16:30:51.278765    4455 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:30:51.282819    4455 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:30:51.285873    4455 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:30:51.288799    4455 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:30:51.291812    4455 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:30:51.295109    4455 config.go:182] Loaded profile config "multinode-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:30:51.295376    4455 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:30:51.299829    4455 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:30:51.306848    4455 start.go:297] selected driver: qemu2
	I0729 16:30:51.306858    4455 start.go:901] validating driver "qemu2" against &{Name:multinode-971000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:30:51.306949    4455 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:30:51.309183    4455 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:30:51.309222    4455 cni.go:84] Creating CNI manager for ""
	I0729 16:30:51.309226    4455 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 16:30:51.309270    4455 start.go:340] cluster config:
	{Name:multinode-971000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:30:51.312803    4455 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:30:51.320853    4455 out.go:177] * Starting "multinode-971000" primary control-plane node in "multinode-971000" cluster
	I0729 16:30:51.323850    4455 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:30:51.323875    4455 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:30:51.323889    4455 cache.go:56] Caching tarball of preloaded images
	I0729 16:30:51.323957    4455 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:30:51.323963    4455 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:30:51.324027    4455 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/multinode-971000/config.json ...
	I0729 16:30:51.324479    4455 start.go:360] acquireMachinesLock for multinode-971000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:30:51.324517    4455 start.go:364] duration metric: took 31.042µs to acquireMachinesLock for "multinode-971000"
	I0729 16:30:51.324527    4455 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:30:51.324532    4455 fix.go:54] fixHost starting: 
	I0729 16:30:51.324662    4455 fix.go:112] recreateIfNeeded on multinode-971000: state=Stopped err=<nil>
	W0729 16:30:51.324670    4455 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:30:51.332701    4455 out.go:177] * Restarting existing qemu2 VM for "multinode-971000" ...
	I0729 16:30:51.336847    4455 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:30:51.336885    4455 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:a2:1b:62:d8:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/disk.qcow2
	I0729 16:30:51.339006    4455 main.go:141] libmachine: STDOUT: 
	I0729 16:30:51.339028    4455 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:30:51.339059    4455 fix.go:56] duration metric: took 14.526625ms for fixHost
	I0729 16:30:51.339063    4455 start.go:83] releasing machines lock for "multinode-971000", held for 14.542292ms
	W0729 16:30:51.339070    4455 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:30:51.339109    4455 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:30:51.339115    4455 start.go:729] Will try again in 5 seconds ...
	I0729 16:30:56.341184    4455 start.go:360] acquireMachinesLock for multinode-971000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:30:56.341738    4455 start.go:364] duration metric: took 449.625µs to acquireMachinesLock for "multinode-971000"
	I0729 16:30:56.341973    4455 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:30:56.341993    4455 fix.go:54] fixHost starting: 
	I0729 16:30:56.342753    4455 fix.go:112] recreateIfNeeded on multinode-971000: state=Stopped err=<nil>
	W0729 16:30:56.342781    4455 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:30:56.348337    4455 out.go:177] * Restarting existing qemu2 VM for "multinode-971000" ...
	I0729 16:30:56.353286    4455 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:30:56.353522    4455 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:a2:1b:62:d8:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/multinode-971000/disk.qcow2
	I0729 16:30:56.362620    4455 main.go:141] libmachine: STDOUT: 
	I0729 16:30:56.362676    4455 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:30:56.362774    4455 fix.go:56] duration metric: took 20.784167ms for fixHost
	I0729 16:30:56.362789    4455 start.go:83] releasing machines lock for "multinode-971000", held for 20.927208ms
	W0729 16:30:56.362954    4455 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-971000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-971000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:30:56.370325    4455 out.go:177] 
	W0729 16:30:56.373223    4455 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:30:56.373247    4455 out.go:239] * 
	* 
	W0729 16:30:56.375766    4455 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:30:56.385246    4455 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-971000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000: exit status 7 (69.424084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-971000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-971000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-971000-m01 --driver=qemu2 : exit status 80 (9.919967083s)

                                                
                                                
-- stdout --
	* [multinode-971000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-971000-m01" primary control-plane node in "multinode-971000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-971000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-971000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-971000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-971000-m02 --driver=qemu2 : exit status 80 (10.392237584s)

                                                
                                                
-- stdout --
	* [multinode-971000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-971000-m02" primary control-plane node in "multinode-971000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-971000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-971000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-971000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-971000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-971000: exit status 83 (78.565292ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-971000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-971000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-971000 -n multinode-971000: exit status 7 (29.77875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.54s)

                                                
                                    
TestPreload (9.93s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-416000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
E0729 16:31:18.158592    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-416000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.775787958s)

                                                
                                                
-- stdout --
	* [test-preload-416000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-416000" primary control-plane node in "test-preload-416000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-416000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:31:17.136870    4519 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:31:17.137003    4519 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:31:17.137006    4519 out.go:304] Setting ErrFile to fd 2...
	I0729 16:31:17.137008    4519 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:31:17.137142    4519 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:31:17.138227    4519 out.go:298] Setting JSON to false
	I0729 16:31:17.154140    4519 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3644,"bootTime":1722292233,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:31:17.154203    4519 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:31:17.160550    4519 out.go:177] * [test-preload-416000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:31:17.171544    4519 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:31:17.171580    4519 notify.go:220] Checking for updates...
	I0729 16:31:17.178397    4519 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:31:17.181458    4519 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:31:17.185471    4519 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:31:17.188472    4519 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:31:17.191469    4519 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:31:17.194748    4519 config.go:182] Loaded profile config "multinode-971000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:31:17.194800    4519 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:31:17.198496    4519 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:31:17.205502    4519 start.go:297] selected driver: qemu2
	I0729 16:31:17.205510    4519 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:31:17.205522    4519 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:31:17.207831    4519 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:31:17.211444    4519 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:31:17.214526    4519 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:31:17.214563    4519 cni.go:84] Creating CNI manager for ""
	I0729 16:31:17.214573    4519 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:31:17.214577    4519 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:31:17.214597    4519 start.go:340] cluster config:
	{Name:test-preload-416000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:31:17.218289    4519 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:31:17.225532    4519 out.go:177] * Starting "test-preload-416000" primary control-plane node in "test-preload-416000" cluster
	I0729 16:31:17.229464    4519 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0729 16:31:17.229531    4519 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/test-preload-416000/config.json ...
	I0729 16:31:17.229545    4519 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/test-preload-416000/config.json: {Name:mk834fb5b6d868520796b4e634e86a63f261c9a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:31:17.229545    4519 cache.go:107] acquiring lock: {Name:mk398b2a2c30354278149aa4f8fa41608d46d5dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:31:17.229553    4519 cache.go:107] acquiring lock: {Name:mk3c4942b49f80c896b6dbb03e275e5f236d5862 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:31:17.229560    4519 cache.go:107] acquiring lock: {Name:mk3cc3a6dbbe1de706589c1caabb067be1c6b94b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:31:17.229762    4519 cache.go:107] acquiring lock: {Name:mkdd4a5455c1e0a01d3d4444ac22ee3541e56ba0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:31:17.229812    4519 cache.go:107] acquiring lock: {Name:mkd675ae83f5a60ef503c5d6c46a72d7c0d524fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:31:17.229850    4519 cache.go:107] acquiring lock: {Name:mke09783bfb9ac462aaa821843be9491d1dfd320 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:31:17.229876    4519 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 16:31:17.229888    4519 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 16:31:17.229899    4519 start.go:360] acquireMachinesLock for test-preload-416000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:31:17.229908    4519 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:31:17.229914    4519 cache.go:107] acquiring lock: {Name:mk8e1523fe24469a4227cd2b87ec1c1bdede0fd7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:31:17.229898    4519 cache.go:107] acquiring lock: {Name:mk36c32b9b71f2224dc7f54f8cad018fbd266015 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:31:17.230024    4519 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 16:31:17.230031    4519 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 16:31:17.230045    4519 start.go:364] duration metric: took 138.5µs to acquireMachinesLock for "test-preload-416000"
	I0729 16:31:17.230063    4519 start.go:93] Provisioning new machine with config: &{Name:test-preload-416000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:31:17.230112    4519 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:31:17.230200    4519 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 16:31:17.230215    4519 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:31:17.230220    4519 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 16:31:17.233506    4519 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:31:17.242448    4519 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 16:31:17.242585    4519 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:31:17.243250    4519 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 16:31:17.244489    4519 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 16:31:17.245168    4519 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 16:31:17.245405    4519 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 16:31:17.245434    4519 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 16:31:17.245462    4519 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:31:17.251098    4519 start.go:159] libmachine.API.Create for "test-preload-416000" (driver="qemu2")
	I0729 16:31:17.251115    4519 client.go:168] LocalClient.Create starting
	I0729 16:31:17.251189    4519 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:31:17.251221    4519 main.go:141] libmachine: Decoding PEM data...
	I0729 16:31:17.251230    4519 main.go:141] libmachine: Parsing certificate...
	I0729 16:31:17.251266    4519 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:31:17.251289    4519 main.go:141] libmachine: Decoding PEM data...
	I0729 16:31:17.251296    4519 main.go:141] libmachine: Parsing certificate...
	I0729 16:31:17.251632    4519 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:31:17.393963    4519 main.go:141] libmachine: Creating SSH key...
	I0729 16:31:17.493484    4519 main.go:141] libmachine: Creating Disk image...
	I0729 16:31:17.493503    4519 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:31:17.493725    4519 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/test-preload-416000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/test-preload-416000/disk.qcow2
	I0729 16:31:17.503985    4519 main.go:141] libmachine: STDOUT: 
	I0729 16:31:17.504000    4519 main.go:141] libmachine: STDERR: 
	I0729 16:31:17.504049    4519 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/test-preload-416000/disk.qcow2 +20000M
	I0729 16:31:17.512929    4519 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:31:17.512950    4519 main.go:141] libmachine: STDERR: 
	I0729 16:31:17.512966    4519 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/test-preload-416000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/test-preload-416000/disk.qcow2
	I0729 16:31:17.512970    4519 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:31:17.512980    4519 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:31:17.513012    4519 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/test-preload-416000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/test-preload-416000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/test-preload-416000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:b4:82:c5:78:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/test-preload-416000/disk.qcow2
	I0729 16:31:17.515412    4519 main.go:141] libmachine: STDOUT: 
	I0729 16:31:17.515426    4519 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:31:17.515447    4519 client.go:171] duration metric: took 264.336291ms to LocalClient.Create
	I0729 16:31:17.787487    4519 cache.go:162] opening:  /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0729 16:31:17.798648    4519 cache.go:162] opening:  /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0729 16:31:17.836282    4519 cache.go:162] opening:  /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 16:31:17.852829    4519 cache.go:162] opening:  /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0729 16:31:17.882133    4519 cache.go:162] opening:  /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W0729 16:31:17.900491    4519 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 16:31:17.900527    4519 cache.go:162] opening:  /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 16:31:17.947743    4519 cache.go:162] opening:  /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 16:31:17.990819    4519 cache.go:157] /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0729 16:31:17.990860    4519 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 760.963084ms
	I0729 16:31:17.990896    4519 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0729 16:31:18.126164    4519 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 16:31:18.126243    4519 cache.go:162] opening:  /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 16:31:18.430715    4519 cache.go:157] /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 16:31:18.430793    4519 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.201280458s
	I0729 16:31:18.430817    4519 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 16:31:19.515605    4519 start.go:128] duration metric: took 2.285538083s to createHost
	I0729 16:31:19.515651    4519 start.go:83] releasing machines lock for "test-preload-416000", held for 2.28566575s
	W0729 16:31:19.515701    4519 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:31:19.533091    4519 out.go:177] * Deleting "test-preload-416000" in qemu2 ...
	W0729 16:31:19.561642    4519 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:31:19.561674    4519 start.go:729] Will try again in 5 seconds ...
	I0729 16:31:19.645247    4519 cache.go:157] /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0729 16:31:19.645294    4519 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.415615583s
	I0729 16:31:19.645318    4519 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0729 16:31:20.797086    4519 cache.go:157] /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0729 16:31:20.797148    4519 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.567702708s
	I0729 16:31:20.797220    4519 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0729 16:31:21.313330    4519 cache.go:157] /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0729 16:31:21.313373    4519 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.083774166s
	I0729 16:31:21.313400    4519 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0729 16:31:21.991939    4519 cache.go:157] /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0729 16:31:21.991991    4519 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.762399667s
	I0729 16:31:21.992058    4519 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0729 16:31:23.280739    4519 cache.go:157] /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0729 16:31:23.280822    4519 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.051450042s
	I0729 16:31:23.280890    4519 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0729 16:31:24.561774    4519 start.go:360] acquireMachinesLock for test-preload-416000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:31:24.562182    4519 start.go:364] duration metric: took 335.75µs to acquireMachinesLock for "test-preload-416000"
	I0729 16:31:24.562304    4519 start.go:93] Provisioning new machine with config: &{Name:test-preload-416000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:31:24.562557    4519 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:31:24.569162    4519 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:31:24.621214    4519 start.go:159] libmachine.API.Create for "test-preload-416000" (driver="qemu2")
	I0729 16:31:24.621277    4519 client.go:168] LocalClient.Create starting
	I0729 16:31:24.621393    4519 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:31:24.621466    4519 main.go:141] libmachine: Decoding PEM data...
	I0729 16:31:24.621487    4519 main.go:141] libmachine: Parsing certificate...
	I0729 16:31:24.621571    4519 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:31:24.621615    4519 main.go:141] libmachine: Decoding PEM data...
	I0729 16:31:24.621631    4519 main.go:141] libmachine: Parsing certificate...
	I0729 16:31:24.622155    4519 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:31:24.779324    4519 main.go:141] libmachine: Creating SSH key...
	I0729 16:31:24.825231    4519 main.go:141] libmachine: Creating Disk image...
	I0729 16:31:24.825236    4519 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:31:24.825409    4519 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/test-preload-416000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/test-preload-416000/disk.qcow2
	I0729 16:31:24.834788    4519 main.go:141] libmachine: STDOUT: 
	I0729 16:31:24.834819    4519 main.go:141] libmachine: STDERR: 
	I0729 16:31:24.834884    4519 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/test-preload-416000/disk.qcow2 +20000M
	I0729 16:31:24.842950    4519 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:31:24.842963    4519 main.go:141] libmachine: STDERR: 
	I0729 16:31:24.842977    4519 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/test-preload-416000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/test-preload-416000/disk.qcow2
	I0729 16:31:24.842982    4519 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:31:24.842997    4519 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:31:24.843034    4519 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/test-preload-416000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/test-preload-416000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/test-preload-416000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:26:07:51:0e:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/test-preload-416000/disk.qcow2
	I0729 16:31:24.844807    4519 main.go:141] libmachine: STDOUT: 
	I0729 16:31:24.844821    4519 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:31:24.844834    4519 client.go:171] duration metric: took 223.556542ms to LocalClient.Create
	I0729 16:31:26.845281    4519 start.go:128] duration metric: took 2.282723167s to createHost
	I0729 16:31:26.845375    4519 start.go:83] releasing machines lock for "test-preload-416000", held for 2.283238416s
	W0729 16:31:26.845658    4519 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-416000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-416000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:31:26.858288    4519 out.go:177] 
	W0729 16:31:26.862345    4519 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:31:26.862383    4519 out.go:239] * 
	* 
	W0729 16:31:26.864059    4519 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:31:26.869575    4519 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-416000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-29 16:31:26.887636 -0700 PDT m=+2698.435427792
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-416000 -n test-preload-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-416000 -n test-preload-416000: exit status 7 (66.593042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-416000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-416000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-416000
--- FAIL: TestPreload (9.93s)
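Triage note: every qemu2 VM creation in this run dies at the same step, a "Connection refused" on /var/run/socket_vmnet, which means the socket_vmnet daemon on the CI host is not serving its socket. A hedged diagnostic sketch for the host (the socket and client paths come from the log above; the launchd service label is an assumption based on a stock socket_vmnet install and may differ on this machine):

	ls -l /var/run/socket_vmnet                                          # does the unix socket exist at all?
	pgrep -fl socket_vmnet                                               # is the daemon process alive?
	sudo launchctl print system/io.github.lima-vm.socket_vmnet          # assumed label: inspect the service, if installed via launchd
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet   # assumed label: restart the daemon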

TestScheduledStopUnix (10.06s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-225000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-225000 --memory=2048 --driver=qemu2 : exit status 80 (9.921630417s)

-- stdout --
	* [scheduled-stop-225000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-225000" primary control-plane node in "scheduled-stop-225000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-225000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-225000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-225000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-225000" primary control-plane node in "scheduled-stop-225000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-225000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-225000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-29 16:31:36.957064 -0700 PDT m=+2708.505160042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-225000 -n scheduled-stop-225000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-225000 -n scheduled-stop-225000: exit status 7 (65.202541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-225000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-225000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-225000
--- FAIL: TestScheduledStopUnix (10.06s)

TestSkaffold (13.11s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3542069256 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3542069256 version: (1.070032125s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-172000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-172000 --memory=2600 --driver=qemu2 : exit status 80 (9.8110475s)

-- stdout --
	* [skaffold-172000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-172000" primary control-plane node in "skaffold-172000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-172000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-172000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-172000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-172000" primary control-plane node in "skaffold-172000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-172000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-172000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-07-29 16:31:50.073811 -0700 PDT m=+2721.622302542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-172000 -n skaffold-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-172000 -n skaffold-172000: exit status 7 (60.923209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-172000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-172000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-172000
--- FAIL: TestSkaffold (13.11s)

TestRunningBinaryUpgrade (604.61s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1562102404 start -p running-upgrade-896000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1562102404 start -p running-upgrade-896000 --memory=2200 --vm-driver=qemu2 : (1m8.064638542s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-896000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0729 16:34:21.218956    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
E0729 16:34:39.539719    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-896000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m22.959385458s)

-- stdout --
	* [running-upgrade-896000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-896000" primary control-plane node in "running-upgrade-896000" cluster
	* Updating the running qemu2 "running-upgrade-896000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0729 16:33:44.247269    4979 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:33:44.247398    4979 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:33:44.247403    4979 out.go:304] Setting ErrFile to fd 2...
	I0729 16:33:44.247406    4979 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:33:44.247526    4979 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:33:44.248497    4979 out.go:298] Setting JSON to false
	I0729 16:33:44.265344    4979 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3791,"bootTime":1722292233,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:33:44.265410    4979 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:33:44.269668    4979 out.go:177] * [running-upgrade-896000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:33:44.277612    4979 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:33:44.277658    4979 notify.go:220] Checking for updates...
	I0729 16:33:44.286560    4979 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:33:44.290577    4979 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:33:44.293577    4979 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:33:44.297561    4979 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:33:44.301497    4979 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:33:44.305788    4979 config.go:182] Loaded profile config "running-upgrade-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:33:44.309585    4979 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 16:33:44.312600    4979 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:33:44.313969    4979 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:33:44.320551    4979 start.go:297] selected driver: qemu2
	I0729 16:33:44.320557    4979 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50279 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 16:33:44.320605    4979 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:33:44.323023    4979 cni.go:84] Creating CNI manager for ""
	I0729 16:33:44.323043    4979 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:33:44.323066    4979 start.go:340] cluster config:
	{Name:running-upgrade-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50279 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 16:33:44.323114    4979 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:33:44.331536    4979 out.go:177] * Starting "running-upgrade-896000" primary control-plane node in "running-upgrade-896000" cluster
	I0729 16:33:44.335563    4979 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 16:33:44.335580    4979 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0729 16:33:44.335592    4979 cache.go:56] Caching tarball of preloaded images
	I0729 16:33:44.335656    4979 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:33:44.335665    4979 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0729 16:33:44.335724    4979 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/running-upgrade-896000/config.json ...
	I0729 16:33:44.336215    4979 start.go:360] acquireMachinesLock for running-upgrade-896000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:33:44.336250    4979 start.go:364] duration metric: took 30µs to acquireMachinesLock for "running-upgrade-896000"
	I0729 16:33:44.336260    4979 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:33:44.336266    4979 fix.go:54] fixHost starting: 
	I0729 16:33:44.336933    4979 fix.go:112] recreateIfNeeded on running-upgrade-896000: state=Running err=<nil>
	W0729 16:33:44.336941    4979 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:33:44.341554    4979 out.go:177] * Updating the running qemu2 "running-upgrade-896000" VM ...
	I0729 16:33:44.349578    4979 machine.go:94] provisionDockerMachine start ...
	I0729 16:33:44.349615    4979 main.go:141] libmachine: Using SSH client type: native
	I0729 16:33:44.349761    4979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100832a10] 0x100835270 <nil>  [] 0s} localhost 50247 <nil> <nil>}
	I0729 16:33:44.349766    4979 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 16:33:44.399482    4979 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-896000
	
	I0729 16:33:44.399498    4979 buildroot.go:166] provisioning hostname "running-upgrade-896000"
	I0729 16:33:44.399537    4979 main.go:141] libmachine: Using SSH client type: native
	I0729 16:33:44.399651    4979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100832a10] 0x100835270 <nil>  [] 0s} localhost 50247 <nil> <nil>}
	I0729 16:33:44.399659    4979 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-896000 && echo "running-upgrade-896000" | sudo tee /etc/hostname
	I0729 16:33:44.453523    4979 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-896000
	
	I0729 16:33:44.453569    4979 main.go:141] libmachine: Using SSH client type: native
	I0729 16:33:44.453679    4979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100832a10] 0x100835270 <nil>  [] 0s} localhost 50247 <nil> <nil>}
	I0729 16:33:44.453687    4979 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-896000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-896000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-896000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 16:33:44.504416    4979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
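
The three SSH commands above (hostname query, hostname set, /etc/hosts patch) all go through the same native SSH client dialed against the forwarded localhost port. A minimal sketch of that pattern with golang.org/x/crypto/ssh, reusing the host, port, user, and key path from the log; everything else, including the blanket host-key skip, is illustrative:

// sshrun.go - run one command over SSH the way the provisioner does.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/running-upgrade-896000/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, no known_hosts
	}
	client, err := ssh.Dial("tcp", "localhost:50247", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	fmt.Printf("SSH cmd err, output: %v: %s", err, out)
}
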
	I0729 16:33:44.504427    4979 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19348-1218/.minikube CaCertPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19348-1218/.minikube}
	I0729 16:33:44.504434    4979 buildroot.go:174] setting up certificates
	I0729 16:33:44.504439    4979 provision.go:84] configureAuth start
	I0729 16:33:44.504445    4979 provision.go:143] copyHostCerts
	I0729 16:33:44.504525    4979 exec_runner.go:144] found /Users/jenkins/minikube-integration/19348-1218/.minikube/ca.pem, removing ...
	I0729 16:33:44.504531    4979 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19348-1218/.minikube/ca.pem
	I0729 16:33:44.504676    4979 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19348-1218/.minikube/ca.pem (1082 bytes)
	I0729 16:33:44.504884    4979 exec_runner.go:144] found /Users/jenkins/minikube-integration/19348-1218/.minikube/cert.pem, removing ...
	I0729 16:33:44.504888    4979 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19348-1218/.minikube/cert.pem
	I0729 16:33:44.504931    4979 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19348-1218/.minikube/cert.pem (1123 bytes)
	I0729 16:33:44.505036    4979 exec_runner.go:144] found /Users/jenkins/minikube-integration/19348-1218/.minikube/key.pem, removing ...
	I0729 16:33:44.505039    4979 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19348-1218/.minikube/key.pem
	I0729 16:33:44.505083    4979 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19348-1218/.minikube/key.pem (1675 bytes)
	I0729 16:33:44.505179    4979 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-896000 san=[127.0.0.1 localhost minikube running-upgrade-896000]
	I0729 16:33:44.674005    4979 provision.go:177] copyRemoteCerts
	I0729 16:33:44.674049    4979 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 16:33:44.674057    4979 sshutil.go:53] new ssh client: &{IP:localhost Port:50247 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/running-upgrade-896000/id_rsa Username:docker}
	I0729 16:33:44.702204    4979 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 16:33:44.708994    4979 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 16:33:44.715488    4979 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 16:33:44.722427    4979 provision.go:87] duration metric: took 217.989375ms to configureAuth
	I0729 16:33:44.722436    4979 buildroot.go:189] setting minikube options for container-runtime
	I0729 16:33:44.722551    4979 config.go:182] Loaded profile config "running-upgrade-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:33:44.722580    4979 main.go:141] libmachine: Using SSH client type: native
	I0729 16:33:44.722666    4979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100832a10] 0x100835270 <nil>  [] 0s} localhost 50247 <nil> <nil>}
	I0729 16:33:44.722672    4979 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 16:33:44.771670    4979 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 16:33:44.771679    4979 buildroot.go:70] root file system type: tmpfs
	I0729 16:33:44.771725    4979 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 16:33:44.771764    4979 main.go:141] libmachine: Using SSH client type: native
	I0729 16:33:44.771870    4979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100832a10] 0x100835270 <nil>  [] 0s} localhost 50247 <nil> <nil>}
	I0729 16:33:44.771902    4979 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 16:33:44.826548    4979 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 16:33:44.826610    4979 main.go:141] libmachine: Using SSH client type: native
	I0729 16:33:44.826716    4979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100832a10] 0x100835270 <nil>  [] 0s} localhost 50247 <nil> <nil>}
	I0729 16:33:44.826724    4979 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 16:33:44.876716    4979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 16:33:44.876728    4979 machine.go:97] duration metric: took 527.159ms to provisionDockerMachine
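
The diff-then-mv one-liner above is what keeps the unit update idempotent: docker is only reloaded, re-enabled, and restarted when the rendered unit actually differs from what is already on disk. A rough Go equivalent of that check; the unit path is from the log, the rendered content is elided, and a real run would need root to touch /lib/systemd:

// updateunit.go - "write .new, replace only if changed" sketch.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func updateUnit(path string, desired []byte) error {
	current, _ := os.ReadFile(path) // a missing file just counts as "changed"
	if bytes.Equal(current, desired) {
		return nil // unit unchanged: docker keeps running untouched
	}
	if err := os.WriteFile(path+".new", desired, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	// only reached on an actual change, mirroring the || { ... } in the log
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", "docker"},
		{"restart", "docker"},
	} {
		if err := exec.Command("systemctl", args...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := updateUnit("/lib/systemd/system/docker.service", []byte("...rendered unit...")); err != nil {
		log.Fatal(err)
	}
}
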
	I0729 16:33:44.876733    4979 start.go:293] postStartSetup for "running-upgrade-896000" (driver="qemu2")
	I0729 16:33:44.876739    4979 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 16:33:44.876793    4979 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 16:33:44.876803    4979 sshutil.go:53] new ssh client: &{IP:localhost Port:50247 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/running-upgrade-896000/id_rsa Username:docker}
	I0729 16:33:44.904391    4979 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 16:33:44.905964    4979 info.go:137] Remote host: Buildroot 2021.02.12
	I0729 16:33:44.905973    4979 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19348-1218/.minikube/addons for local assets ...
	I0729 16:33:44.906057    4979 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19348-1218/.minikube/files for local assets ...
	I0729 16:33:44.906146    4979 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19348-1218/.minikube/files/etc/ssl/certs/17142.pem -> 17142.pem in /etc/ssl/certs
	I0729 16:33:44.906241    4979 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 16:33:44.909297    4979 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/files/etc/ssl/certs/17142.pem --> /etc/ssl/certs/17142.pem (1708 bytes)
	I0729 16:33:44.916861    4979 start.go:296] duration metric: took 40.121167ms for postStartSetup
	I0729 16:33:44.916881    4979 fix.go:56] duration metric: took 580.632667ms for fixHost
	I0729 16:33:44.916924    4979 main.go:141] libmachine: Using SSH client type: native
	I0729 16:33:44.917034    4979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100832a10] 0x100835270 <nil>  [] 0s} localhost 50247 <nil> <nil>}
	I0729 16:33:44.917038    4979 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 16:33:44.967069    4979 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722296024.637284097
	
	I0729 16:33:44.967078    4979 fix.go:216] guest clock: 1722296024.637284097
	I0729 16:33:44.967082    4979 fix.go:229] Guest: 2024-07-29 16:33:44.637284097 -0700 PDT Remote: 2024-07-29 16:33:44.916882 -0700 PDT m=+0.689623042 (delta=-279.597903ms)
	I0729 16:33:44.967092    4979 fix.go:200] guest clock delta is within tolerance: -279.597903ms
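
The guest-clock check parses the `date +%s.%N` output captured above and compares it with the host clock; a delta inside the tolerance is logged and ignored. A small sketch of that comparison (parsing via float64 is an assumption and loses sub-microsecond precision, which is harmless at millisecond-scale deltas):

// clockdelta.go - compare guest clock (date +%s.%N output) to host clock.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// value captured from the guest in the log above
	secs, err := strconv.ParseFloat("1722296024.637284097", 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(int64(secs), int64(math.Mod(secs, 1)*1e9))
	delta := guest.Sub(time.Now())
	// small skews are tolerated; large ones would trigger a clock resync
	fmt.Printf("guest clock delta: %v\n", delta)
}
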
	I0729 16:33:44.967114    4979 start.go:83] releasing machines lock for "running-upgrade-896000", held for 630.878459ms
	I0729 16:33:44.967171    4979 ssh_runner.go:195] Run: cat /version.json
	I0729 16:33:44.967175    4979 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 16:33:44.967180    4979 sshutil.go:53] new ssh client: &{IP:localhost Port:50247 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/running-upgrade-896000/id_rsa Username:docker}
	I0729 16:33:44.967197    4979 sshutil.go:53] new ssh client: &{IP:localhost Port:50247 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/running-upgrade-896000/id_rsa Username:docker}
	W0729 16:33:44.967720    4979 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50247: connect: connection refused
	I0729 16:33:44.967743    4979 retry.go:31] will retry after 289.3864ms: dial tcp [::1]:50247: connect: connection refused
	W0729 16:33:45.291594    4979 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0729 16:33:45.291743    4979 ssh_runner.go:195] Run: systemctl --version
	I0729 16:33:45.294712    4979 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 16:33:45.297232    4979 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 16:33:45.297268    4979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0729 16:33:45.301162    4979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0729 16:33:45.306645    4979 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 16:33:45.306652    4979 start.go:495] detecting cgroup driver to use...
	I0729 16:33:45.306727    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 16:33:45.312995    4979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0729 16:33:45.316187    4979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 16:33:45.319486    4979 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 16:33:45.319512    4979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 16:33:45.322797    4979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 16:33:45.325777    4979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 16:33:45.328533    4979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 16:33:45.331408    4979 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 16:33:45.334709    4979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 16:33:45.337556    4979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 16:33:45.340374    4979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 16:33:45.343537    4979 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 16:33:45.346480    4979 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 16:33:45.349077    4979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:33:45.424596    4979 ssh_runner.go:195] Run: sudo systemctl restart containerd
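
The sed pipeline above rewrites /etc/containerd/config.toml in place so containerd agrees with the kubelet's cgroupfs driver. One of those rewrites (forcing `SystemdCgroup = false`) expressed in Go, operating on an inline snippet rather than the real file:

// cgroupfs.go - the SystemdCgroup sed rewrite as a Go regexp.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
	// same shape as: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
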
	I0729 16:33:45.435082    4979 start.go:495] detecting cgroup driver to use...
	I0729 16:33:45.435161    4979 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 16:33:45.439908    4979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 16:33:45.445445    4979 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 16:33:45.451710    4979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 16:33:45.456208    4979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 16:33:45.460353    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 16:33:45.465668    4979 ssh_runner.go:195] Run: which cri-dockerd
	I0729 16:33:45.466958    4979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 16:33:45.469590    4979 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 16:33:45.474267    4979 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 16:33:45.551769    4979 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 16:33:45.633097    4979 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 16:33:45.633155    4979 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 16:33:45.638591    4979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:33:45.714473    4979 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 16:33:48.287272    4979 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5728585s)
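
The 130-byte daemon.json pushed before this restart is not printed in the log; a plausible shape for it, using Docker's documented keys with assumed values, marshalled from Go:

// daemonjson.go - assumed /etc/docker/daemon.json content; only exec-opts
// is implied by the "configuring docker to use cgroupfs" line above.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	cfg := map[string]any{
		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"}, // implied by the log
		"log-driver":     "json-file",                              // assumed
		"log-opts":       map[string]string{"max-size": "100m"},    // assumed
		"storage-driver": "overlay2",                               // assumed
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}
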
	I0729 16:33:48.287358    4979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 16:33:48.292168    4979 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0729 16:33:48.299516    4979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 16:33:48.304675    4979 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 16:33:48.382943    4979 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 16:33:48.447082    4979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:33:48.509124    4979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 16:33:48.514634    4979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 16:33:48.519382    4979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:33:48.581455    4979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 16:33:48.621381    4979 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 16:33:48.621451    4979 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 16:33:48.625026    4979 start.go:563] Will wait 60s for crictl version
	I0729 16:33:48.625082    4979 ssh_runner.go:195] Run: which crictl
	I0729 16:33:48.626471    4979 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 16:33:48.638559    4979 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0729 16:33:48.638629    4979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 16:33:48.651340    4979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 16:33:48.672754    4979 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0729 16:33:48.672880    4979 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0729 16:33:48.674265    4979 kubeadm.go:883] updating cluster {Name:running-upgrade-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50279 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0729 16:33:48.674328    4979 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 16:33:48.674375    4979 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 16:33:48.684528    4979 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 16:33:48.684536    4979 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 16:33:48.684582    4979 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 16:33:48.687697    4979 ssh_runner.go:195] Run: which lz4
	I0729 16:33:48.688878    4979 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 16:33:48.690062    4979 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 16:33:48.690074    4979 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0729 16:33:49.604114    4979 docker.go:649] duration metric: took 915.295333ms to copy over tarball
	I0729 16:33:49.604171    4979 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 16:33:50.731950    4979 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.127799625s)
	I0729 16:33:50.731966    4979 ssh_runner.go:146] rm: /preloaded.tar.lz4
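
The preload path here is plain shell: scp the lz4 tarball into the VM, unpack it over /var so docker's image store is pre-populated, then delete the tarball. The extraction step exactly as the log runs it, driven from Go:

// preload.go - unpack the preloaded image tarball over /var.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract preload: %v: %s", err, out)
	}
}
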
	I0729 16:33:50.747826    4979 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 16:33:50.751260    4979 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0729 16:33:50.756194    4979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:33:50.818796    4979 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 16:33:52.034020    4979 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.215244167s)
	I0729 16:33:52.034116    4979 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 16:33:52.058123    4979 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 16:33:52.058137    4979 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 16:33:52.058142    4979 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 16:33:52.062049    4979 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:33:52.063778    4979 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:33:52.066227    4979 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:33:52.066263    4979 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:33:52.068475    4979 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:33:52.068539    4979 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:33:52.069993    4979 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:33:52.070034    4979 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:33:52.071493    4979 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:33:52.071583    4979 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:33:52.072683    4979 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 16:33:52.072703    4979 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:33:52.073626    4979 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:33:52.073791    4979 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 16:33:52.074765    4979 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 16:33:52.075750    4979 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 16:33:52.543386    4979 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:33:52.563100    4979 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:33:52.567706    4979 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0729 16:33:52.567748    4979 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:33:52.567821    4979 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:33:52.575780    4979 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:33:52.579430    4979 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	W0729 16:33:52.588930    4979 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 16:33:52.589088    4979 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:33:52.591991    4979 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0729 16:33:52.592411    4979 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0729 16:33:52.592433    4979 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:33:52.592476    4979 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:33:52.599392    4979 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 16:33:52.612385    4979 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 16:33:52.614508    4979 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0729 16:33:52.614524    4979 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:33:52.614552    4979 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:33:52.615963    4979 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0729 16:33:52.615973    4979 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:33:52.616001    4979 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:33:52.616101    4979 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0729 16:33:52.616111    4979 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:33:52.616133    4979 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:33:52.620097    4979 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0729 16:33:52.637589    4979 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0729 16:33:52.637610    4979 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0729 16:33:52.637665    4979 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0729 16:33:52.641811    4979 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0729 16:33:52.641824    4979 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0729 16:33:52.641830    4979 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 16:33:52.641875    4979 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0729 16:33:52.648907    4979 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 16:33:52.649011    4979 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0729 16:33:52.649032    4979 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 16:33:52.656823    4979 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 16:33:52.656943    4979 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0729 16:33:52.659017    4979 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0729 16:33:52.659040    4979 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0729 16:33:52.659162    4979 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0729 16:33:52.659170    4979 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0729 16:33:52.659175    4979 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 16:33:52.671501    4979 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 16:33:52.671514    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0729 16:33:52.716965    4979 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0729 16:33:52.728416    4979 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 16:33:52.728434    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0729 16:33:52.766638    4979 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
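
"Loading image" in these lines is literally `sudo cat <tarball> | docker load`. The same pipe from Go, using the pause image path seen above (sudo omitted; assumes sufficient privileges):

// dockerload.go - stream a cached image tarball into docker load.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	f, err := os.Open("/var/lib/minikube/images/pause_3.7")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f // equivalent of the cat | docker load pipe
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("docker load: %v: %s", err, out)
	}
}
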
	W0729 16:33:52.918650    4979 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 16:33:52.918788    4979 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:33:52.935127    4979 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0729 16:33:52.935154    4979 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:33:52.935216    4979 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:33:53.614885    4979 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 16:33:53.615193    4979 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 16:33:53.621358    4979 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0729 16:33:53.621404    4979 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0729 16:33:53.680518    4979 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 16:33:53.680531    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0729 16:33:53.919436    4979 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 16:33:53.919471    4979 cache_images.go:92] duration metric: took 1.86137925s to LoadCachedImages
	W0729 16:33:53.919512    4979 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0729 16:33:53.919518    4979 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0729 16:33:53.919567    4979 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-896000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 16:33:53.919627    4979 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 16:33:53.933224    4979 cni.go:84] Creating CNI manager for ""
	I0729 16:33:53.933238    4979 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:33:53.933244    4979 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 16:33:53.933253    4979 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-896000 NodeName:running-upgrade-896000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 16:33:53.933320    4979 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-896000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 16:33:53.933379    4979 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0729 16:33:53.936143    4979 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 16:33:53.936167    4979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 16:33:53.939047    4979 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0729 16:33:53.944125    4979 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 16:33:53.949101    4979 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
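
The generated kubeadm config above is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), written to kubeadm.yaml.new by the scp line just before this. A quick way to split and list those documents, assuming the external gopkg.in/yaml.v3 package:

// splitkubeadm.go - enumerate the YAML documents in the generated config.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f) // Decode advances one document per call
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
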
	I0729 16:33:53.954198    4979 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0729 16:33:53.955582    4979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:33:54.027511    4979 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 16:33:54.032975    4979 certs.go:68] Setting up /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/running-upgrade-896000 for IP: 10.0.2.15
	I0729 16:33:54.032983    4979 certs.go:194] generating shared ca certs ...
	I0729 16:33:54.032991    4979 certs.go:226] acquiring lock for ca certs: {Name:mk96bd81121b57115fda9376f192a645eb60e2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:33:54.033135    4979 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/ca.key
	I0729 16:33:54.033169    4979 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/proxy-client-ca.key
	I0729 16:33:54.033175    4979 certs.go:256] generating profile certs ...
	I0729 16:33:54.033233    4979 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/running-upgrade-896000/client.key
	I0729 16:33:54.033250    4979 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/running-upgrade-896000/apiserver.key.1ba13787
	I0729 16:33:54.033259    4979 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/running-upgrade-896000/apiserver.crt.1ba13787 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0729 16:33:54.249861    4979 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/running-upgrade-896000/apiserver.crt.1ba13787 ...
	I0729 16:33:54.249873    4979 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/running-upgrade-896000/apiserver.crt.1ba13787: {Name:mk222b3f06f2dcbc89f45d0de9db3a21b6e37113 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:33:54.250154    4979 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/running-upgrade-896000/apiserver.key.1ba13787 ...
	I0729 16:33:54.250159    4979 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/running-upgrade-896000/apiserver.key.1ba13787: {Name:mk565ab68ee312ae7952692f8e294ccac6a1827a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:33:54.250294    4979 certs.go:381] copying /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/running-upgrade-896000/apiserver.crt.1ba13787 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/running-upgrade-896000/apiserver.crt
	I0729 16:33:54.253295    4979 certs.go:385] copying /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/running-upgrade-896000/apiserver.key.1ba13787 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/running-upgrade-896000/apiserver.key
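
The "generating signed profile cert" step boils down to an x509 template carrying the four IP SANs listed in the log, signed by the minikube CA. A stripped-down sketch with crypto/x509 (self-signed here for brevity; the real cert is CA-signed, and the RSA key size is an assumption):

// apiservercert.go - cert template with the SANs from the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		IPAddresses: []net.IP{ // the SAN list from the log
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver cert: %d DER bytes\n", len(der))
}
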
	I0729 16:33:54.253453    4979 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/running-upgrade-896000/proxy-client.key
	I0729 16:33:54.253579    4979 certs.go:484] found cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/1714.pem (1338 bytes)
	W0729 16:33:54.253602    4979 certs.go:480] ignoring /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/1714_empty.pem, impossibly tiny 0 bytes
	I0729 16:33:54.253608    4979 certs.go:484] found cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 16:33:54.253627    4979 certs.go:484] found cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem (1082 bytes)
	I0729 16:33:54.253653    4979 certs.go:484] found cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem (1123 bytes)
	I0729 16:33:54.253671    4979 certs.go:484] found cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/key.pem (1675 bytes)
	I0729 16:33:54.253712    4979 certs.go:484] found cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/files/etc/ssl/certs/17142.pem (1708 bytes)
	I0729 16:33:54.254056    4979 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 16:33:54.262235    4979 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 16:33:54.269015    4979 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 16:33:54.275647    4979 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 16:33:54.283242    4979 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/running-upgrade-896000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 16:33:54.290026    4979 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/running-upgrade-896000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 16:33:54.296558    4979 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/running-upgrade-896000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 16:33:54.303399    4979 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/running-upgrade-896000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 16:33:54.310890    4979 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/1714.pem --> /usr/share/ca-certificates/1714.pem (1338 bytes)
	I0729 16:33:54.317683    4979 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/files/etc/ssl/certs/17142.pem --> /usr/share/ca-certificates/17142.pem (1708 bytes)
	I0729 16:33:54.324243    4979 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 16:33:54.331250    4979 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 16:33:54.336160    4979 ssh_runner.go:195] Run: openssl version
	I0729 16:33:54.338034    4979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 16:33:54.341125    4979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:33:54.342850    4979 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:33:54.342875    4979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:33:54.345005    4979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 16:33:54.347633    4979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1714.pem && ln -fs /usr/share/ca-certificates/1714.pem /etc/ssl/certs/1714.pem"
	I0729 16:33:54.351037    4979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1714.pem
	I0729 16:33:54.352489    4979 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 22:54 /usr/share/ca-certificates/1714.pem
	I0729 16:33:54.352506    4979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1714.pem
	I0729 16:33:54.354338    4979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1714.pem /etc/ssl/certs/51391683.0"
	I0729 16:33:54.357031    4979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17142.pem && ln -fs /usr/share/ca-certificates/17142.pem /etc/ssl/certs/17142.pem"
	I0729 16:33:54.359850    4979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17142.pem
	I0729 16:33:54.361521    4979 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 22:54 /usr/share/ca-certificates/17142.pem
	I0729 16:33:54.361539    4979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17142.pem
	I0729 16:33:54.363534    4979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17142.pem /etc/ssl/certs/3ec20f2e.0"
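
The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention: openssl x509 -hash -noout prints an eight-hex-digit hash of the certificate subject, and the system trust store looks CA certificates up under <hash>.0 in /etc/ssl/certs. A minimal Go sketch of that hash-then-link step, shelling out to openssl the same way the log does (linkBySubjectHash is a hypothetical helper, not minikube's actual certs.go code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash mirrors the pattern in the log: compute the OpenSSL
	// subject hash of a CA certificate, then symlink /etc/ssl/certs/<hash>.0
	// to it so the system trust store can find it. Sketch only.
	func linkBySubjectHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // ln -fs semantics: replace any existing link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
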
	I0729 16:33:54.366826    4979 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 16:33:54.368505    4979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 16:33:54.370338    4979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 16:33:54.372304    4979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 16:33:54.374185    4979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 16:33:54.376203    4979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 16:33:54.378113    4979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
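
Each -checkend 86400 run above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); a nonzero exit would force the cert to be regenerated. The same test expressed with Go's standard library, as a sketch (the path in main is one of the certs checked above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path
	// expires within d -- the condition openssl x509 -checkend tests.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}
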
	I0729 16:33:54.380039    4979 kubeadm.go:392] StartCluster: {Name:running-upgrade-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50279 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-896000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 16:33:54.380109    4979 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 16:33:54.390797    4979 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 16:33:54.394504    4979 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 16:33:54.394509    4979 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 16:33:54.394531    4979 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 16:33:54.397311    4979 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 16:33:54.397563    4979 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-896000" does not appear in /Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:33:54.397620    4979 kubeconfig.go:62] /Users/jenkins/minikube-integration/19348-1218/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-896000" cluster setting kubeconfig missing "running-upgrade-896000" context setting]
	I0729 16:33:54.397749    4979 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/kubeconfig: {Name:mkadb977bd50641dea3f6c522a66ad62f461af12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:33:54.398420    4979 kapi.go:59] client config for running-upgrade-896000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/running-upgrade-896000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/running-upgrade-896000/client.key", CAFile:"/Users/jenkins/minikube-integration/19348-1218/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101bc8080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 16:33:54.398747    4979 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 16:33:54.401984    4979 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-896000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
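
Drift detection here is exit-code based: diff -u over the deployed kubeadm.yaml and the freshly rendered kubeadm.yaml.new exits 1 when they differ (in this run, criSocket gained a unix:// scheme and the cgroup driver changed from systemd to cgroupfs), and that nonzero status triggers the reconfigure path below. A sketch of the pattern (configDrifted is a hypothetical helper):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// configDrifted runs `diff -u old new` and interprets the exit code:
	// 0 = identical, 1 = files differ (drift), anything else = real error.
	func configDrifted(oldPath, newPath string) (bool, error) {
		err := exec.Command("diff", "-u", oldPath, newPath).Run()
		if err == nil {
			return false, nil
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 1 {
			return true, nil
		}
		return false, err
	}

	func main() {
		drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		fmt.Println("drift:", drifted)
	}
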
	I0729 16:33:54.401990    4979 kubeadm.go:1160] stopping kube-system containers ...
	I0729 16:33:54.402030    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 16:33:54.412894    4979 docker.go:483] Stopping containers: [5754f35da11d 1fb17c1a2afc bc211ff6bebe 0146be8aa7de a4c6fb95565a 16be0b768ead ec95d1ba66c7 2e9d7d1522c7 7cc104746214 bd1a8e2d767b 2fc9c34a112c 7d26b6a55d97 4929f58f66bb 3eaca96bc8df]
	I0729 16:33:54.412967    4979 ssh_runner.go:195] Run: docker stop 5754f35da11d 1fb17c1a2afc bc211ff6bebe 0146be8aa7de a4c6fb95565a 16be0b768ead ec95d1ba66c7 2e9d7d1522c7 7cc104746214 bd1a8e2d767b 2fc9c34a112c 7d26b6a55d97 4929f58f66bb 3eaca96bc8df
	I0729 16:33:54.424310    4979 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 16:33:54.531482    4979 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 16:33:54.536153    4979 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Jul 29 23:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Jul 29 23:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul 29 23:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Jul 29 23:33 /etc/kubernetes/scheduler.conf
	
	I0729 16:33:54.536196    4979 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50279 /etc/kubernetes/admin.conf
	I0729 16:33:54.539936    4979 kubeadm.go:163] "https://control-plane.minikube.internal:50279" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50279 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 16:33:54.539966    4979 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 16:33:54.543630    4979 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50279 /etc/kubernetes/kubelet.conf
	I0729 16:33:54.547261    4979 kubeadm.go:163] "https://control-plane.minikube.internal:50279" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50279 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 16:33:54.547286    4979 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 16:33:54.550950    4979 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50279 /etc/kubernetes/controller-manager.conf
	I0729 16:33:54.554308    4979 kubeadm.go:163] "https://control-plane.minikube.internal:50279" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50279 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 16:33:54.554332    4979 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 16:33:54.557407    4979 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50279 /etc/kubernetes/scheduler.conf
	I0729 16:33:54.560143    4979 kubeadm.go:163] "https://control-plane.minikube.internal:50279" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50279 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 16:33:54.560165    4979 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 16:33:54.563123    4979 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 16:33:54.566234    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:33:54.588113    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:33:55.326118    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:33:55.514414    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:33:55.539813    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
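
With the stale kubeconfigs removed and the new kubeadm.yaml in place, the restart replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) in order instead of running a full init, so existing cluster state is kept. A sketch of that sequence (runPhases is a hypothetical wrapper; it invokes the versioned kubeadm binary by absolute path rather than via env PATH=... as the log does, which is equivalent for resolution):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// runPhases replays the kubeadm init phases used for a cluster restart,
	// in the order the log shows. Sketch; error handling simplified.
	func runPhases(kubeadmCfg string) error {
		const kubeadm = "/var/lib/minikube/binaries/v1.24.1/kubeadm"
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}
		for _, p := range phases {
			args := append([]string{"init", "phase"}, p...)
			args = append(args, "--config", kubeadmCfg)
			if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
				return fmt.Errorf("phase %v: %v\n%s", p, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := runPhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
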
	I0729 16:33:55.565341    4979 api_server.go:52] waiting for apiserver process to appear ...
	I0729 16:33:55.565423    4979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:33:56.067897    4979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:33:56.567477    4979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:33:56.572175    4979 api_server.go:72] duration metric: took 1.006866834s to wait for apiserver process to appear ...
	I0729 16:33:56.572185    4979 api_server.go:88] waiting for apiserver healthz status ...
	I0729 16:33:56.572202    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:34:01.574313    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:34:01.574383    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:34:06.575036    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:34:06.575111    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:34:11.575874    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:34:11.575920    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:34:16.576752    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:34:16.576846    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:34:21.578291    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:34:21.578416    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:34:26.580315    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:34:26.580400    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:34:31.582890    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:34:31.582968    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:34:36.585535    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:34:36.585615    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:34:41.588139    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:34:41.588224    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:34:46.590696    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:34:46.590820    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:34:51.591505    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:34:51.591566    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:34:56.593738    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
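
Each probe above gives up after roughly five seconds (the gap between consecutive timestamps) and is retried until an overall deadline; only then does minikube fall back to collecting diagnostics. A sketch of that polling loop, assuming the ~5 s per-request timeout inferred from the timestamps, and skipping TLS verification for brevity where the real client presents the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// pollHealthz probes the apiserver /healthz endpoint until it answers
	// 200 OK or the overall deadline passes. Sketch only: a production
	// client would trust the cluster CA instead of skipping verification.
	func pollHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5 s gaps in the log
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := pollHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
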
	I0729 16:34:56.593928    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:34:56.610037    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:34:56.610159    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:34:56.623224    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:34:56.623292    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:34:56.634148    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:34:56.634218    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:34:56.644626    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:34:56.644692    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:34:56.655450    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:34:56.655522    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:34:56.665750    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:34:56.665817    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:34:56.675938    4979 logs.go:276] 0 containers: []
	W0729 16:34:56.675948    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:34:56.676005    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:34:56.691310    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:34:56.691330    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:34:56.691336    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:34:56.695676    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:34:56.695685    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:34:56.765720    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:34:56.765737    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:34:56.779891    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:34:56.779901    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:34:56.791433    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:34:56.791448    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:34:56.806520    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:34:56.806534    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:34:56.818210    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:34:56.818222    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:34:56.829447    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:34:56.829458    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:34:56.841501    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:34:56.841513    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:34:56.878228    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:34:56.878235    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:34:56.891981    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:34:56.891992    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:34:56.912782    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:34:56.912797    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:34:56.935783    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:34:56.935799    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:34:56.946939    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:34:56.946956    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:34:56.959202    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:34:56.959216    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:34:56.983616    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:34:56.983623    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:34:57.002234    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:34:57.002248    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
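
The block above is one full diagnostics pass: enumerate each control-plane component's containers via the k8s_<name> filter, then tail the last 400 lines of each, alongside the kubelet and Docker journals, dmesg, and kubectl describe nodes. The same pass repeats after every failed healthz probe for the rest of this log. A condensed sketch of the per-container part (gatherLogs is a hypothetical helper):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// gatherLogs mirrors the collection cycle: find containers whose names
	// match k8s_<component>, then tail each one's logs.
	func gatherLogs(components []string) {
		for _, c := range components {
			filter := fmt.Sprintf("--filter=name=k8s_%s", c)
			out, err := exec.Command("docker", "ps", "-a", filter, "--format", "{{.ID}}").Output()
			if err != nil {
				continue
			}
			for _, id := range strings.Fields(string(out)) {
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("== %s [%s] ==\n%s\n", c, id, logs)
			}
		}
	}

	func main() {
		gatherLogs([]string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"})
	}
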
	I0729 16:34:59.516803    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:35:04.519140    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:35:04.519545    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:35:04.558937    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:35:04.559070    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:35:04.580543    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:35:04.580664    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:35:04.599408    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:35:04.599477    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:35:04.611432    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:35:04.611511    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:35:04.622103    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:35:04.622171    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:35:04.632432    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:35:04.632504    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:35:04.642748    4979 logs.go:276] 0 containers: []
	W0729 16:35:04.642759    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:35:04.642818    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:35:04.653342    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:35:04.653357    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:35:04.653362    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:35:04.664383    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:35:04.664394    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:35:04.683247    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:35:04.683256    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:35:04.697544    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:35:04.697554    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:35:04.702085    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:35:04.702094    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:35:04.717008    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:35:04.717019    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:35:04.737604    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:35:04.737616    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:35:04.755897    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:35:04.755910    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:35:04.767899    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:35:04.767910    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:35:04.803365    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:35:04.803373    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:35:04.817282    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:35:04.817295    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:35:04.834821    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:35:04.834833    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:35:04.852331    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:35:04.852344    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:35:04.888552    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:35:04.888563    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:35:04.901005    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:35:04.901015    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:35:04.915717    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:35:04.915728    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:35:04.928176    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:35:04.928186    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:35:07.455634    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:35:12.458166    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:35:12.458617    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:35:12.497166    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:35:12.497313    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:35:12.518744    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:35:12.518852    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:35:12.534446    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:35:12.534525    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:35:12.550204    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:35:12.550277    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:35:12.560839    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:35:12.560905    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:35:12.571871    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:35:12.571940    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:35:12.582823    4979 logs.go:276] 0 containers: []
	W0729 16:35:12.582838    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:35:12.582896    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:35:12.596491    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:35:12.596512    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:35:12.596520    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:35:12.634622    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:35:12.634632    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:35:12.646956    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:35:12.646967    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:35:12.663667    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:35:12.663678    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:35:12.680961    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:35:12.680973    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:35:12.694351    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:35:12.694364    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:35:12.706310    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:35:12.706322    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:35:12.726007    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:35:12.726017    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:35:12.743219    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:35:12.743229    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:35:12.754967    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:35:12.754980    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:35:12.781658    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:35:12.781667    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:35:12.785892    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:35:12.785897    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:35:12.799546    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:35:12.799557    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:35:12.814770    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:35:12.814783    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:35:12.826492    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:35:12.826503    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:35:12.861431    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:35:12.861443    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:35:12.875854    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:35:12.875864    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:35:15.388759    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:35:20.391365    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:35:20.391851    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:35:20.431712    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:35:20.431850    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:35:20.453101    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:35:20.453214    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:35:20.468132    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:35:20.468202    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:35:20.480579    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:35:20.480649    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:35:20.492348    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:35:20.492414    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:35:20.507475    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:35:20.507532    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:35:20.520679    4979 logs.go:276] 0 containers: []
	W0729 16:35:20.520693    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:35:20.520750    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:35:20.531661    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:35:20.531678    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:35:20.531683    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:35:20.536110    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:35:20.536118    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:35:20.550112    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:35:20.550125    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:35:20.561465    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:35:20.561475    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:35:20.575796    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:35:20.575809    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:35:20.602256    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:35:20.602271    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:35:20.639921    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:35:20.639935    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:35:20.657201    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:35:20.657215    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:35:20.669046    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:35:20.669057    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:35:20.680189    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:35:20.680202    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:35:20.691935    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:35:20.691944    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:35:20.712818    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:35:20.712830    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:35:20.726998    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:35:20.727012    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:35:20.738865    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:35:20.738878    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:35:20.751062    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:35:20.751072    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:35:20.788294    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:35:20.788304    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:35:20.799609    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:35:20.799622    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:35:23.319201    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:35:28.321878    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:35:28.322267    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:35:28.362467    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:35:28.362600    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:35:28.383585    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:35:28.383707    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:35:28.398975    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:35:28.399052    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:35:28.411704    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:35:28.411779    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:35:28.423860    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:35:28.423927    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:35:28.434872    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:35:28.434941    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:35:28.444601    4979 logs.go:276] 0 containers: []
	W0729 16:35:28.444613    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:35:28.444674    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:35:28.455102    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:35:28.455120    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:35:28.455125    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:35:28.474540    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:35:28.474549    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:35:28.491989    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:35:28.491999    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:35:28.503140    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:35:28.503150    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:35:28.514525    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:35:28.514534    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:35:28.533943    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:35:28.533953    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:35:28.548750    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:35:28.548759    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:35:28.560008    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:35:28.560020    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:35:28.586350    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:35:28.586362    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:35:28.590680    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:35:28.590689    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:35:28.604333    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:35:28.604347    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:35:28.616495    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:35:28.616505    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:35:28.636781    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:35:28.636790    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:35:28.650127    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:35:28.650139    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:35:28.686960    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:35:28.686968    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:35:28.721927    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:35:28.721940    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:35:28.733274    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:35:28.733287    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:35:31.247395    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:35:36.250081    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:35:36.250525    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:35:36.289337    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:35:36.289497    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:35:36.310638    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:35:36.310749    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:35:36.325971    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:35:36.326052    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:35:36.338664    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:35:36.338739    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:35:36.349959    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:35:36.350035    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:35:36.360754    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:35:36.360838    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:35:36.371272    4979 logs.go:276] 0 containers: []
	W0729 16:35:36.371283    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:35:36.371343    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:35:36.382056    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:35:36.382077    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:35:36.382082    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:35:36.396083    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:35:36.396092    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:35:36.417934    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:35:36.417948    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:35:36.429023    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:35:36.429035    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:35:36.443486    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:35:36.443519    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:35:36.463174    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:35:36.463186    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:35:36.489790    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:35:36.489803    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:35:36.501509    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:35:36.501524    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:35:36.523852    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:35:36.523864    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:35:36.537926    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:35:36.537937    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:35:36.549491    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:35:36.549500    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:35:36.553834    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:35:36.553839    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:35:36.588950    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:35:36.588963    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:35:36.600887    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:35:36.600901    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:35:36.613292    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:35:36.613302    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:35:36.624682    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:35:36.624696    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:35:36.636087    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:35:36.636097    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:35:39.175375    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:35:44.178108    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:35:44.178549    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:35:44.217820    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:35:44.217957    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:35:44.239906    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:35:44.240030    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:35:44.255982    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:35:44.256061    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:35:44.268265    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:35:44.268333    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:35:44.278929    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:35:44.278996    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:35:44.290230    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:35:44.290302    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:35:44.304841    4979 logs.go:276] 0 containers: []
	W0729 16:35:44.304852    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:35:44.304909    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:35:44.315456    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:35:44.315473    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:35:44.315478    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:35:44.327972    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:35:44.327983    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:35:44.345380    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:35:44.345392    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:35:44.349594    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:35:44.349600    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:35:44.383765    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:35:44.383777    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:35:44.395625    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:35:44.395637    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:35:44.410508    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:35:44.410520    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:35:44.449750    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:35:44.449764    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:35:44.468100    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:35:44.468111    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:35:44.480879    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:35:44.480893    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:35:44.504651    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:35:44.504658    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:35:44.518503    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:35:44.518516    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:35:44.538445    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:35:44.538457    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:35:44.557668    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:35:44.557681    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:35:44.569249    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:35:44.569262    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:35:44.583146    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:35:44.583160    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:35:44.595816    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:35:44.595830    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:35:47.109554    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:35:52.112298    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:35:52.112715    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:35:52.155422    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:35:52.155549    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:35:52.174025    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:35:52.174109    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:35:52.188005    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:35:52.188078    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:35:52.199167    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:35:52.199239    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:35:52.209563    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:35:52.209623    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:35:52.219667    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:35:52.219725    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:35:52.229779    4979 logs.go:276] 0 containers: []
	W0729 16:35:52.229793    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:35:52.229848    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:35:52.240209    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:35:52.240227    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:35:52.240232    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:35:52.257057    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:35:52.257068    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:35:52.271370    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:35:52.271383    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:35:52.288329    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:35:52.288342    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:35:52.313733    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:35:52.313739    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:35:52.329893    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:35:52.329904    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:35:52.341558    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:35:52.341569    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:35:52.379493    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:35:52.379502    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:35:52.383731    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:35:52.383741    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:35:52.420377    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:35:52.420387    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:35:52.438360    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:35:52.438372    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:35:52.449613    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:35:52.449624    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:35:52.461944    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:35:52.461955    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:35:52.486286    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:35:52.486296    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:35:52.498091    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:35:52.498101    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:35:52.512107    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:35:52.512118    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:35:52.523760    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:35:52.523771    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:35:55.036873    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:36:00.039155    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:36:00.039628    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:36:00.075720    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:36:00.075821    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:36:00.094265    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:36:00.094366    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:36:00.123847    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:36:00.123923    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:36:00.151556    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:36:00.151623    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:36:00.172163    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:36:00.172240    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:36:00.183597    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:36:00.183667    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:36:00.194108    4979 logs.go:276] 0 containers: []
	W0729 16:36:00.194123    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:36:00.194185    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:36:00.208931    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:36:00.208953    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:36:00.208958    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:36:00.234061    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:36:00.234068    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:36:00.248779    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:36:00.248792    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:36:00.269800    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:36:00.269813    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:36:00.287902    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:36:00.287916    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:36:00.299656    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:36:00.299670    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:36:00.314385    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:36:00.314395    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:36:00.325996    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:36:00.326010    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:36:00.360838    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:36:00.360848    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:36:00.374518    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:36:00.374528    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:36:00.385900    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:36:00.385912    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:36:00.400672    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:36:00.400683    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:36:00.418317    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:36:00.418329    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:36:00.453900    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:36:00.453908    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:36:00.465887    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:36:00.465898    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:36:00.477400    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:36:00.477412    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:36:00.489348    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:36:00.489360    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:36:02.996191    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:36:07.998340    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:36:07.998465    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:36:08.015198    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:36:08.015270    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:36:08.034068    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:36:08.034138    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:36:08.046098    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:36:08.046187    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:36:08.058403    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:36:08.058482    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:36:08.070950    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:36:08.071018    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:36:08.083007    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:36:08.083079    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:36:08.094604    4979 logs.go:276] 0 containers: []
	W0729 16:36:08.094616    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:36:08.094674    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:36:08.106425    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:36:08.106471    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:36:08.106476    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:36:08.122300    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:36:08.122315    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:36:08.135870    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:36:08.135885    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:36:08.148899    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:36:08.148914    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:36:08.178726    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:36:08.178744    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:36:08.191872    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:36:08.191890    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:36:08.211772    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:36:08.211787    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:36:08.239434    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:36:08.239453    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:36:08.266023    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:36:08.266040    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:36:08.311412    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:36:08.311433    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:36:08.316395    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:36:08.316402    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:36:08.331525    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:36:08.331541    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:36:08.348572    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:36:08.348584    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:36:08.360981    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:36:08.360992    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:36:08.374069    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:36:08.374081    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:36:08.410523    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:36:08.410536    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:36:08.428876    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:36:08.428893    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:36:10.946572    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:36:15.948694    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:36:15.949211    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:36:15.989581    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:36:15.989711    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:36:16.007030    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:36:16.007125    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:36:16.020209    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:36:16.020286    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:36:16.032257    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:36:16.032336    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:36:16.042873    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:36:16.042939    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:36:16.053388    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:36:16.053457    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:36:16.063966    4979 logs.go:276] 0 containers: []
	W0729 16:36:16.063978    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:36:16.064038    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:36:16.074319    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:36:16.074338    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:36:16.074343    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:36:16.091734    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:36:16.091747    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:36:16.103332    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:36:16.103344    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:36:16.127391    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:36:16.127400    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:36:16.131345    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:36:16.131352    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:36:16.165259    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:36:16.165275    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:36:16.185513    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:36:16.185529    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:36:16.197288    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:36:16.197302    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:36:16.211426    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:36:16.211443    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:36:16.224899    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:36:16.224912    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:36:16.244837    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:36:16.244851    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:36:16.257581    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:36:16.257592    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:36:16.269228    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:36:16.269237    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:36:16.302993    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:36:16.303003    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:36:16.316849    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:36:16.316860    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:36:16.328623    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:36:16.328634    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:36:16.365206    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:36:16.365214    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:36:18.878562    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:36:23.880623    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:36:23.880743    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:36:23.892872    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:36:23.892962    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:36:23.904415    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:36:23.904502    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:36:23.916042    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:36:23.916120    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:36:23.928564    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:36:23.928651    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:36:23.950289    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:36:23.950364    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:36:23.966448    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:36:23.966521    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:36:23.978392    4979 logs.go:276] 0 containers: []
	W0729 16:36:23.978408    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:36:23.978466    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:36:23.990231    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:36:23.990251    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:36:23.990256    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:36:24.004045    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:36:24.004059    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:36:24.026576    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:36:24.026594    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:36:24.041778    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:36:24.041791    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:36:24.062121    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:36:24.062136    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:36:24.076068    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:36:24.076082    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:36:24.094486    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:36:24.094502    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:36:24.106376    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:36:24.106387    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:36:24.132778    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:36:24.132798    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:36:24.172519    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:36:24.172531    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:36:24.188428    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:36:24.188440    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:36:24.209408    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:36:24.209419    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:36:24.247885    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:36:24.247894    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:36:24.260572    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:36:24.260585    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:36:24.272486    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:36:24.272500    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:36:24.277338    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:36:24.277345    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:36:24.289136    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:36:24.289149    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:36:26.802150    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:36:31.803602    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:36:31.804065    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:36:31.844381    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:36:31.844490    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:36:31.861431    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:36:31.861515    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:36:31.878295    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:36:31.878372    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:36:31.889489    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:36:31.889572    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:36:31.904794    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:36:31.904863    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:36:31.915592    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:36:31.915662    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:36:31.926351    4979 logs.go:276] 0 containers: []
	W0729 16:36:31.926364    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:36:31.926423    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:36:31.937001    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:36:31.937019    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:36:31.937025    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:36:31.941343    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:36:31.941349    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:36:31.955627    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:36:31.955639    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:36:31.966985    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:36:31.966998    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:36:31.983425    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:36:31.983435    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:36:32.000945    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:36:32.000958    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:36:32.012622    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:36:32.012631    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:36:32.024099    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:36:32.024114    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:36:32.035285    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:36:32.035299    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:36:32.047671    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:36:32.047681    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:36:32.062863    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:36:32.062875    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:36:32.075054    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:36:32.075065    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:36:32.110902    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:36:32.110912    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:36:32.145960    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:36:32.145973    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:36:32.169798    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:36:32.169813    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:36:32.189867    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:36:32.189879    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:36:32.212568    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:36:32.212582    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:36:34.738293    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:36:39.740772    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:36:39.740885    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:36:39.752555    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:36:39.752626    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:36:39.763673    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:36:39.763753    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:36:39.775210    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:36:39.775284    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:36:39.787030    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:36:39.787105    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:36:39.803426    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:36:39.803496    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:36:39.815267    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:36:39.815335    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:36:39.827525    4979 logs.go:276] 0 containers: []
	W0729 16:36:39.827537    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:36:39.827594    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:36:39.839255    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:36:39.839274    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:36:39.839280    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:36:39.851442    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:36:39.851456    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:36:39.865196    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:36:39.865209    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:36:39.889940    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:36:39.889958    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:36:39.930306    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:36:39.930324    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:36:39.967856    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:36:39.967869    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:36:39.986353    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:36:39.986368    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:36:40.011219    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:36:40.011236    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:36:40.024285    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:36:40.024299    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:36:40.029340    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:36:40.029353    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:36:40.048226    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:36:40.048238    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:36:40.063790    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:36:40.063802    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:36:40.079945    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:36:40.079955    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:36:40.091890    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:36:40.091904    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:36:40.104467    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:36:40.104480    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:36:40.139269    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:36:40.139281    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:36:40.151268    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:36:40.151280    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:36:42.671881    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:36:47.674127    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:36:47.674556    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:36:47.720213    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:36:47.720359    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:36:47.740937    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:36:47.741042    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:36:47.762126    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:36:47.762209    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:36:47.774410    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:36:47.774477    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:36:47.784737    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:36:47.784799    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:36:47.795580    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:36:47.795643    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:36:47.807093    4979 logs.go:276] 0 containers: []
	W0729 16:36:47.807102    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:36:47.807158    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:36:47.822460    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:36:47.822478    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:36:47.822484    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:36:47.858214    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:36:47.858226    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:36:47.862357    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:36:47.862365    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:36:47.898599    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:36:47.898611    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:36:47.913004    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:36:47.913014    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:36:47.925265    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:36:47.925276    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:36:47.936504    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:36:47.936513    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:36:47.959932    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:36:47.959939    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:36:47.977224    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:36:47.977233    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:36:47.991193    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:36:47.991205    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:36:48.002853    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:36:48.002869    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:36:48.023296    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:36:48.023311    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:36:48.035144    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:36:48.035156    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:36:48.053145    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:36:48.053156    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:36:48.065740    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:36:48.065750    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:36:48.079705    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:36:48.079716    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:36:48.091194    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:36:48.091204    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:36:50.605113    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:36:55.607256    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:36:55.607362    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:36:55.618893    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:36:55.618968    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:36:55.629313    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:36:55.629392    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:36:55.647577    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:36:55.647650    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:36:55.658694    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:36:55.658763    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:36:55.669258    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:36:55.669330    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:36:55.679988    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:36:55.680062    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:36:55.690893    4979 logs.go:276] 0 containers: []
	W0729 16:36:55.690907    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:36:55.690968    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:36:55.708108    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:36:55.708130    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:36:55.708136    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:36:55.743636    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:36:55.743651    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:36:55.748182    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:36:55.748190    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:36:55.762198    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:36:55.762209    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:36:55.774209    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:36:55.774221    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:36:55.788570    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:36:55.788581    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:36:55.800467    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:36:55.800477    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:36:55.834805    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:36:55.834820    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:36:55.846508    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:36:55.846523    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:36:55.858708    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:36:55.858719    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:36:55.875616    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:36:55.875627    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:36:55.895239    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:36:55.895251    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:36:55.912826    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:36:55.912838    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:36:55.923655    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:36:55.923666    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:36:55.946430    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:36:55.946438    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:36:55.958998    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:36:55.959010    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:36:55.972803    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:36:55.972813    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:36:58.489749    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:03.492381    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:03.492547    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:37:03.504779    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:37:03.504862    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:37:03.516107    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:37:03.516181    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:37:03.527417    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:37:03.527482    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:37:03.538432    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:37:03.538511    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:37:03.551941    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:37:03.552013    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:37:03.562446    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:37:03.562528    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:37:03.572970    4979 logs.go:276] 0 containers: []
	W0729 16:37:03.572981    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:37:03.573052    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:37:03.586107    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:37:03.586126    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:37:03.586132    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:37:03.607968    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:37:03.607981    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:37:03.621787    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:37:03.621798    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:37:03.658732    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:37:03.658744    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:37:03.678764    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:37:03.678776    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:37:03.697224    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:37:03.697234    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:37:03.709953    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:37:03.709969    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:37:03.722197    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:37:03.722209    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:37:03.733955    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:37:03.733966    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:37:03.738493    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:37:03.738502    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:37:03.752279    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:37:03.752290    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:37:03.768745    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:37:03.768757    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:37:03.780084    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:37:03.780094    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:37:03.805124    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:37:03.805133    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:37:03.817503    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:37:03.817514    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:37:03.855537    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:37:03.855548    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:37:03.883380    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:37:03.883391    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:37:06.401162    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:11.403441    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:11.403899    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:37:11.442249    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:37:11.442393    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:37:11.464177    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:37:11.464293    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:37:11.478996    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:37:11.479067    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:37:11.491216    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:37:11.491293    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:37:11.501890    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:37:11.501957    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:37:11.512683    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:37:11.512754    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:37:11.526862    4979 logs.go:276] 0 containers: []
	W0729 16:37:11.526873    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:37:11.526930    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:37:11.537822    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:37:11.537841    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:37:11.537846    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:37:11.555708    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:37:11.555719    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:37:11.579212    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:37:11.579220    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:37:11.615372    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:37:11.615383    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:37:11.636529    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:37:11.636542    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:37:11.655337    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:37:11.655349    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:37:11.667200    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:37:11.667211    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:37:11.681296    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:37:11.681308    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:37:11.700531    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:37:11.700545    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:37:11.712573    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:37:11.712586    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:37:11.723891    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:37:11.723906    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:37:11.728031    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:37:11.728039    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:37:11.742457    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:37:11.742468    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:37:11.754298    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:37:11.754308    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:37:11.791870    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:37:11.791876    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:37:11.805637    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:37:11.805647    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:37:11.818207    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:37:11.818220    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
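The pattern that repeats through this section is minikube's apiserver health poll: a GET against https://10.0.2.15:8443/healthz with a hard client timeout, a "stopped: ... Client.Timeout exceeded" line when it times out, a log-gathering pass, then another attempt. A minimal Go sketch of that poll, assuming a 5-second client timeout (inferred from the ~5s gap between each "Checking" and "stopped" pair) and skipping TLS verification (an assumption; the real client trusts the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // checkHealthz performs one health probe, mirroring the
    // "Checking apiserver healthz" / "stopped" pairs in the log.
    func checkHealthz(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // inferred from the log's timestamps
    		Transport: &http.Transport{
    			// assumption: verification is skipped here; minikube
    			// actually pins the cluster CA certificate
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return fmt.Errorf("stopped: %s: %w", url, err)
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	for i := 0; i < 10; i++ { // bounded retries for the sketch
    		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
    			fmt.Println(err) // minikube gathers logs here, then retries
    			time.Sleep(2 * time.Second)
    			continue
    		}
    		fmt.Println("apiserver healthy")
    		return
    	}
    }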
	I0729 16:37:14.335916    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:19.337267    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:19.337367    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:37:19.349769    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:37:19.349841    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:37:19.363307    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:37:19.363383    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:37:19.375312    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:37:19.375412    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:37:19.387070    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:37:19.387145    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:37:19.399202    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:37:19.399276    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:37:19.411738    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:37:19.411810    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:37:19.422655    4979 logs.go:276] 0 containers: []
	W0729 16:37:19.422667    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:37:19.422734    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:37:19.433954    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:37:19.433973    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:37:19.433978    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:37:19.472405    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:37:19.472420    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:37:19.489620    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:37:19.489632    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:37:19.502655    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:37:19.502666    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:37:19.516125    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:37:19.516137    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:37:19.529267    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:37:19.529279    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:37:19.544773    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:37:19.544785    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:37:19.561169    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:37:19.561183    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:37:19.592594    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:37:19.592612    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:37:19.616088    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:37:19.616103    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:37:19.623357    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:37:19.623370    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:37:19.661819    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:37:19.661836    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:37:19.683536    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:37:19.683549    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:37:19.697794    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:37:19.697810    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:37:19.723690    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:37:19.723720    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:37:19.747500    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:37:19.747512    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:37:19.763117    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:37:19.763132    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
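Once the container IDs are known, each gather cycle tails the last 400 lines from every container plus the kubelet and Docker journals, all shelled through /bin/bash -c as the Run lines show. A sketch of the two tailing helpers (the commands are lifted from the log; the helper names are mine):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // tailContainer mirrors `docker logs --tail 400 <id>`, run via
    // /bin/bash -c just as the ssh_runner lines above do.
    func tailContainer(id string) (string, error) {
    	out, err := exec.Command("/bin/bash", "-c",
    		"docker logs --tail 400 "+id).CombinedOutput()
    	return string(out), err
    }

    // tailUnits mirrors the journalctl calls used for the kubelet and
    // Docker sections ("sudo journalctl -u docker -u cri-docker -n 400").
    func tailUnits(units ...string) (string, error) {
    	cmd := "sudo journalctl -n 400"
    	for _, u := range units {
    		cmd += " -u " + u
    	}
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	logs, _ := tailContainer("cba04b708df7") // apiserver ID from the log
    	fmt.Println(len(logs), "bytes of kube-apiserver logs")
    	journal, _ := tailUnits("docker", "cri-docker")
    	fmt.Println(len(journal), "bytes of Docker journal")
    }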
	I0729 16:37:22.289342    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:27.290624    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:27.290905    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:37:27.316872    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:37:27.317013    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:37:27.334922    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:37:27.335011    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:37:27.348440    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:37:27.348515    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:37:27.359634    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:37:27.359705    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:37:27.370591    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:37:27.370657    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:37:27.381463    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:37:27.381530    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:37:27.391785    4979 logs.go:276] 0 containers: []
	W0729 16:37:27.391801    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:37:27.391861    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:37:27.401626    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:37:27.401656    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:37:27.401662    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:37:27.415918    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:37:27.415929    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:37:27.433946    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:37:27.433957    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:37:27.445506    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:37:27.445520    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:37:27.457129    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:37:27.457139    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:37:27.475066    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:37:27.475075    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:37:27.489409    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:37:27.489419    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:37:27.504212    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:37:27.504222    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:37:27.521282    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:37:27.521294    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:37:27.532598    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:37:27.532612    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:37:27.570055    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:37:27.570066    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:37:27.606580    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:37:27.606593    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:37:27.625278    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:37:27.625294    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:37:27.648141    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:37:27.648147    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:37:27.659480    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:37:27.659495    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:37:27.663815    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:37:27.663823    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:37:27.682882    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:37:27.682892    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:37:30.206891    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:35.209039    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:35.209177    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:37:35.222921    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:37:35.223011    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:37:35.235144    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:37:35.235220    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:37:35.246366    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:37:35.246437    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:37:35.257334    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:37:35.257406    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:37:35.267993    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:37:35.268061    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:37:35.278514    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:37:35.278581    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:37:35.288813    4979 logs.go:276] 0 containers: []
	W0729 16:37:35.288829    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:37:35.288888    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:37:35.300047    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:37:35.300065    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:37:35.300070    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:37:35.311432    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:37:35.311444    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:37:35.328536    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:37:35.328547    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:37:35.340012    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:37:35.340022    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:37:35.351888    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:37:35.351902    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:37:35.356025    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:37:35.356033    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:37:35.391163    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:37:35.391174    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:37:35.411630    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:37:35.411640    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:37:35.426834    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:37:35.426845    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:37:35.438464    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:37:35.438473    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:37:35.476783    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:37:35.476792    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:37:35.490395    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:37:35.490405    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:37:35.511060    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:37:35.511070    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:37:35.533263    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:37:35.533271    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:37:35.547147    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:37:35.547163    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:37:35.558486    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:37:35.558498    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:37:35.571356    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:37:35.571370    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:37:38.090078    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:43.092636    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:43.092811    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:37:43.107530    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:37:43.107607    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:37:43.119293    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:37:43.119365    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:37:43.129484    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:37:43.129544    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:37:43.140198    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:37:43.140266    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:37:43.151328    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:37:43.151399    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:37:43.161872    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:37:43.161939    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:37:43.172591    4979 logs.go:276] 0 containers: []
	W0729 16:37:43.172604    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:37:43.172662    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:37:43.182730    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:37:43.182748    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:37:43.182754    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:37:43.194332    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:37:43.194342    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:37:43.212394    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:37:43.212403    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:37:43.223337    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:37:43.223349    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:37:43.241096    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:37:43.241107    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:37:43.257460    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:37:43.257471    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:37:43.268938    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:37:43.268949    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:37:43.305280    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:37:43.305292    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:37:43.345071    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:37:43.345085    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:37:43.359097    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:37:43.359110    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:37:43.371493    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:37:43.371510    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:37:43.393848    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:37:43.393855    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:37:43.398136    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:37:43.398145    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:37:43.413324    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:37:43.413333    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:37:43.433233    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:37:43.433251    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:37:43.448877    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:37:43.448890    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:37:43.461202    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:37:43.461214    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:37:45.975500    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:50.977619    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:50.977835    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:37:50.991504    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:37:50.991587    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:37:51.003201    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:37:51.003266    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:37:51.013507    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:37:51.013577    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:37:51.024489    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:37:51.024565    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:37:51.034962    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:37:51.035037    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:37:51.045077    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:37:51.045140    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:37:51.055265    4979 logs.go:276] 0 containers: []
	W0729 16:37:51.055277    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:37:51.055335    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:37:51.067016    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:37:51.067033    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:37:51.067038    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:37:51.079318    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:37:51.079331    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:37:51.083685    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:37:51.083693    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:37:51.100867    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:37:51.100879    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:37:51.119100    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:37:51.119112    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:37:51.132973    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:37:51.132984    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:37:51.147975    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:37:51.147986    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:37:51.159320    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:37:51.159331    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:37:51.182273    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:37:51.182279    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:37:51.216264    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:37:51.216275    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:37:51.230374    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:37:51.230386    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:37:51.245324    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:37:51.245337    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:37:51.256610    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:37:51.256624    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:37:51.268136    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:37:51.268147    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:37:51.279918    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:37:51.279929    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:37:51.292380    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:37:51.292391    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:37:51.328828    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:37:51.328841    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:37:53.854924    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:58.857177    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:58.857265    4979 kubeadm.go:597] duration metric: took 4m4.470128125s to restartPrimaryControlPlane
	W0729 16:37:58.857346    4979 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 16:37:58.857384    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 16:37:59.854257    4979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 16:37:59.859202    4979 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 16:37:59.861972    4979 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 16:37:59.864594    4979 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 16:37:59.864600    4979 kubeadm.go:157] found existing configuration files:
	
	I0729 16:37:59.864625    4979 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50279 /etc/kubernetes/admin.conf
	I0729 16:37:59.867852    4979 kubeadm.go:163] "https://control-plane.minikube.internal:50279" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50279 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 16:37:59.867875    4979 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 16:37:59.870928    4979 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50279 /etc/kubernetes/kubelet.conf
	I0729 16:37:59.873317    4979 kubeadm.go:163] "https://control-plane.minikube.internal:50279" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50279 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 16:37:59.873341    4979 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 16:37:59.876373    4979 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50279 /etc/kubernetes/controller-manager.conf
	I0729 16:37:59.879478    4979 kubeadm.go:163] "https://control-plane.minikube.internal:50279" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50279 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 16:37:59.879504    4979 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 16:37:59.882451    4979 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50279 /etc/kubernetes/scheduler.conf
	I0729 16:37:59.884973    4979 kubeadm.go:163] "https://control-plane.minikube.internal:50279" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50279 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 16:37:59.884996    4979 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
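With the restart abandoned, minikube resets the node and prepares a fresh kubeadm init; first it verifies that each kubeconfig under /etc/kubernetes references the expected control-plane endpoint and deletes any file that does not (here every grep fails with "No such file or directory", so each file is removed). A sketch of that grep-then-rm cleanup (the endpoint and file list are from the log; the helper name is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // removeIfStale mirrors the grep/rm pairs above: when grep exits
    // non-zero (endpoint absent, or the file itself missing), the
    // config is removed so kubeadm init regenerates it.
    func removeIfStale(endpoint, path string) error {
    	if exec.Command("sudo", "grep", endpoint, path).Run() == nil {
    		return nil // endpoint found; keep the file
    	}
    	return exec.Command("sudo", "rm", "-f", path).Run()
    }

    func main() {
    	endpoint := "https://control-plane.minikube.internal:50279"
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		if err := removeIfStale(endpoint, f); err != nil {
    			fmt.Println("cleanup failed for", f, ":", err)
    		}
    	}
    }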
	I0729 16:37:59.888105    4979 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 16:37:59.904811    4979 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 16:37:59.904841    4979 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 16:37:59.955984    4979 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 16:37:59.956154    4979 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 16:37:59.956213    4979 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 16:38:00.005951    4979 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 16:38:00.009227    4979 out.go:204]   - Generating certificates and keys ...
	I0729 16:38:00.009260    4979 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 16:38:00.009293    4979 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 16:38:00.009342    4979 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 16:38:00.009375    4979 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 16:38:00.009417    4979 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 16:38:00.009448    4979 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 16:38:00.009484    4979 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 16:38:00.009524    4979 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 16:38:00.009567    4979 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 16:38:00.009605    4979 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 16:38:00.009626    4979 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 16:38:00.009652    4979 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 16:38:00.062254    4979 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 16:38:00.169428    4979 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 16:38:00.227460    4979 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 16:38:00.297978    4979 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 16:38:00.327345    4979 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 16:38:00.327650    4979 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 16:38:00.327677    4979 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 16:38:00.400552    4979 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 16:38:00.404621    4979 out.go:204]   - Booting up control plane ...
	I0729 16:38:00.404669    4979 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 16:38:00.404711    4979 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 16:38:00.404745    4979 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 16:38:00.404794    4979 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 16:38:00.404884    4979 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 16:38:04.905563    4979 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502184 seconds
	I0729 16:38:04.905637    4979 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 16:38:04.909556    4979 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 16:38:05.428594    4979 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 16:38:05.428950    4979 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-896000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 16:38:05.931862    4979 kubeadm.go:310] [bootstrap-token] Using token: tpzil2.duhzl0ieoas6xbte
	I0729 16:38:05.938159    4979 out.go:204]   - Configuring RBAC rules ...
	I0729 16:38:05.938236    4979 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 16:38:05.938302    4979 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 16:38:05.944662    4979 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 16:38:05.945574    4979 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 16:38:05.946406    4979 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 16:38:05.947310    4979 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 16:38:05.951017    4979 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 16:38:06.111149    4979 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 16:38:06.335766    4979 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 16:38:06.336245    4979 kubeadm.go:310] 
	I0729 16:38:06.336349    4979 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 16:38:06.336388    4979 kubeadm.go:310] 
	I0729 16:38:06.336432    4979 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 16:38:06.336437    4979 kubeadm.go:310] 
	I0729 16:38:06.336449    4979 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 16:38:06.336477    4979 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 16:38:06.336504    4979 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 16:38:06.336524    4979 kubeadm.go:310] 
	I0729 16:38:06.336659    4979 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 16:38:06.336673    4979 kubeadm.go:310] 
	I0729 16:38:06.336721    4979 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 16:38:06.336738    4979 kubeadm.go:310] 
	I0729 16:38:06.336805    4979 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 16:38:06.336850    4979 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 16:38:06.336890    4979 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 16:38:06.336893    4979 kubeadm.go:310] 
	I0729 16:38:06.336933    4979 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 16:38:06.336970    4979 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 16:38:06.336973    4979 kubeadm.go:310] 
	I0729 16:38:06.337021    4979 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tpzil2.duhzl0ieoas6xbte \
	I0729 16:38:06.337075    4979 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b9cecc1c3dd985258772234c33c785f9bcad6eff884cc7ff19b79a518c1cf4e1 \
	I0729 16:38:06.337092    4979 kubeadm.go:310] 	--control-plane 
	I0729 16:38:06.337097    4979 kubeadm.go:310] 
	I0729 16:38:06.337149    4979 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 16:38:06.337158    4979 kubeadm.go:310] 
	I0729 16:38:06.337197    4979 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tpzil2.duhzl0ieoas6xbte \
	I0729 16:38:06.337255    4979 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b9cecc1c3dd985258772234c33c785f9bcad6eff884cc7ff19b79a518c1cf4e1 
	I0729 16:38:06.337338    4979 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 16:38:06.337345    4979 cni.go:84] Creating CNI manager for ""
	I0729 16:38:06.337353    4979 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:38:06.343715    4979 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 16:38:06.351683    4979 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 16:38:06.354743    4979 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
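The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced two lines earlier. Its exact contents are not reproduced in the log; the block below is only a representative bridge conflist of the usual shape (the subnet and every field value are assumptions), embedded in a Go constant to keep these sketches in one language:

    package main

    import "fmt"

    // bridgeConflist is a representative bridge CNI config; the real
    // 496-byte file is not shown in the log, so every value here,
    // including the subnet, is an assumption.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }`

    func main() { fmt.Println(bridgeConflist) }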
	I0729 16:38:06.359550    4979 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 16:38:06.359617    4979 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:38:06.359627    4979 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-896000 minikube.k8s.io/updated_at=2024_07_29T16_38_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a9ecc7e4bd8b0211d6b42552bd8a0113828840b9 minikube.k8s.io/name=running-upgrade-896000 minikube.k8s.io/primary=true
	I0729 16:38:06.402068    4979 ops.go:34] apiserver oom_adj: -16
	I0729 16:38:06.402083    4979 kubeadm.go:1113] duration metric: took 42.501917ms to wait for elevateKubeSystemPrivileges
	I0729 16:38:06.402123    4979 kubeadm.go:394] duration metric: took 4m12.0296915s to StartCluster
	I0729 16:38:06.402137    4979 settings.go:142] acquiring lock: {Name:mk1df9c174f764d47de5a2c25ea0f0fc28c1d98c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:38:06.402229    4979 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:38:06.402598    4979 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/kubeconfig: {Name:mkadb977bd50641dea3f6c522a66ad62f461af12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:38:06.402820    4979 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:38:06.402830    4979 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 16:38:06.402868    4979 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-896000"
	I0729 16:38:06.402881    4979 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-896000"
	I0729 16:38:06.402880    4979 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-896000"
	W0729 16:38:06.402886    4979 addons.go:243] addon storage-provisioner should already be in state true
	I0729 16:38:06.402893    4979 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-896000"
	I0729 16:38:06.402898    4979 host.go:66] Checking if "running-upgrade-896000" exists ...
	I0729 16:38:06.402940    4979 config.go:182] Loaded profile config "running-upgrade-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:38:06.403771    4979 kapi.go:59] client config for running-upgrade-896000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/running-upgrade-896000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/running-upgrade-896000/client.key", CAFile:"/Users/jenkins/minikube-integration/19348-1218/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101bc8080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 16:38:06.403891    4979 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-896000"
	W0729 16:38:06.403895    4979 addons.go:243] addon default-storageclass should already be in state true
	I0729 16:38:06.403902    4979 host.go:66] Checking if "running-upgrade-896000" exists ...
	I0729 16:38:06.406707    4979 out.go:177] * Verifying Kubernetes components...
	I0729 16:38:06.407007    4979 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 16:38:06.410909    4979 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 16:38:06.410915    4979 sshutil.go:53] new ssh client: &{IP:localhost Port:50247 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/running-upgrade-896000/id_rsa Username:docker}
	I0729 16:38:06.413629    4979 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:38:06.417651    4979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:38:06.421547    4979 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 16:38:06.421557    4979 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 16:38:06.421564    4979 sshutil.go:53] new ssh client: &{IP:localhost Port:50247 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/running-upgrade-896000/id_rsa Username:docker}
	I0729 16:38:06.492494    4979 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 16:38:06.497848    4979 api_server.go:52] waiting for apiserver process to appear ...
	I0729 16:38:06.497883    4979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:38:06.501744    4979 api_server.go:72] duration metric: took 98.915458ms to wait for apiserver process to appear ...
	I0729 16:38:06.501751    4979 api_server.go:88] waiting for apiserver healthz status ...
	I0729 16:38:06.501758    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:06.530223    4979 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 16:38:06.545836    4979 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
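Addon enablement is the cluster's own kubectl applying the manifests that were scp'd in above, with KUBECONFIG pointing at the in-VM config, as the two Run lines show. A sketch of that apply step (the manifest paths and kubectl binary location are from the log; the helper name is mine):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // applyAddon mirrors the addon-enable step above: run the cluster's
    // bundled kubectl against the in-VM kubeconfig. sudo accepts the
    // VAR=value prefix argument as an environment override.
    func applyAddon(manifest string) error {
    	out, err := exec.Command("sudo",
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.24.1/kubectl",
    		"apply", "-f", manifest).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%s: %w\n%s", manifest, err, out)
    	}
    	return nil
    }

    func main() {
    	for _, m := range []string{
    		"/etc/kubernetes/addons/storageclass.yaml",
    		"/etc/kubernetes/addons/storage-provisioner.yaml",
    	} {
    		if err := applyAddon(m); err != nil {
    			fmt.Println("addon apply failed:", err)
    		}
    	}
    }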
	I0729 16:38:11.503765    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:11.503792    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:16.503940    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:16.503964    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:21.504274    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:21.504294    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:26.504575    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:26.504600    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:31.505071    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:31.505090    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:36.505663    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:36.505685    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 16:38:36.873923    4979 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 16:38:36.878517    4979 out.go:177] * Enabled addons: storage-provisioner
	I0729 16:38:36.892447    4979 addons.go:510] duration metric: took 30.490543083s for enable addons: enabled=[storage-provisioner]
	I0729 16:38:41.506428    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:41.506475    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:46.507594    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:46.507638    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:51.508991    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:51.509018    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:56.510737    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:56.510758    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:01.512798    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:01.512839    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:06.514913    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:06.514996    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:06.525687    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:39:06.525757    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:06.537536    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:39:06.537606    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:06.548097    4979 logs.go:276] 2 containers: [148e5d9f19b4 1d8ed2209c51]
	I0729 16:39:06.548168    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:06.558459    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:39:06.558519    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:06.576234    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:39:06.576303    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:06.586496    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:39:06.586562    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:06.600267    4979 logs.go:276] 0 containers: []
	W0729 16:39:06.600282    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:06.600336    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:06.611069    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:39:06.611084    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:39:06.611089    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:39:06.622794    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:39:06.622805    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:39:06.634963    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:39:06.634974    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:06.646316    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:06.646328    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:06.681409    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:06.681417    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:06.686080    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:06.686090    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:06.720949    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:39:06.720961    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:39:06.735996    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:39:06.736007    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:39:06.753135    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:39:06.753145    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:39:06.764952    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:06.764963    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:06.790218    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:39:06.790229    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:39:06.804132    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:39:06.804146    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:39:06.817555    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:39:06.817568    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
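The five-second cadence of the api_server.go:253/269 pairs above is a plain polling loop: an HTTPS GET against /healthz with a client-side timeout, retried until an overall deadline expires. Below is a minimal Go sketch of that pattern; the URL is taken from the log, but the 5 s timeout, the TLS handling, and the control flow are assumptions for illustration, not minikube's actual implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it answers
// 200 OK or the overall deadline passes. Each failed GET burns the full
// client timeout, which is what produces the ~5 s spacing in the log above.
func waitForHealthz(url string, deadline time.Time) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumed from the log's 5 s cadence
		Transport: &http.Transport{
			// the guest apiserver presents a self-signed cert,
			// so verification is skipped in this illustration
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for time.Now().Before(deadline) {
		fmt.Printf("Checking apiserver healthz at %s ...\n", url)
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil
		}
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(30*time.Second)); err != nil {
		fmt.Println(err)
	}
}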
	I0729 16:39:09.331077    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:14.333224    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:14.333379    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:14.345468    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:39:14.345539    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:14.355945    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:39:14.356016    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:14.366777    4979 logs.go:276] 2 containers: [148e5d9f19b4 1d8ed2209c51]
	I0729 16:39:14.366845    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:14.377054    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:39:14.377116    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:14.387468    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:39:14.387532    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:14.400575    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:39:14.400640    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:14.410934    4979 logs.go:276] 0 containers: []
	W0729 16:39:14.410944    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:14.410993    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:14.421476    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:39:14.421492    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:14.421498    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:14.426432    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:39:14.426441    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:39:14.445777    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:39:14.445789    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:39:14.457984    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:39:14.457995    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:39:14.473681    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:39:14.473694    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:39:14.485063    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:39:14.485073    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:39:14.500915    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:14.500929    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:14.525747    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:14.525757    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:14.560000    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:39:14.560008    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:39:14.574992    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:39:14.575003    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:39:14.586282    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:39:14.586295    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:39:14.605232    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:39:14.605243    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:14.617232    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:14.617242    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
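Each diagnostic sweep starts by locating one container per control-plane component with docker ps -a, a name filter, and a Go-template format string so that only the short container IDs come back. A hedged local sketch of that discovery step follows; minikube runs the same command inside the VM via its ssh_runner, and containerIDs is a name introduced here for illustration.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the "docker ps -a --filter=name=k8s_<component>
// --format={{.ID}}" step above: kubelet-managed containers are named
// k8s_<component>_..., and the template prints one short ID per line.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // cf. logs.go:276
		if len(ids) == 0 {
			// cf. the logs.go:278 warning for "kindnet" above
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}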
	I0729 16:39:17.154888    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:22.157097    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:22.157258    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:22.169485    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:39:22.169557    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:22.180490    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:39:22.180560    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:22.190850    4979 logs.go:276] 2 containers: [148e5d9f19b4 1d8ed2209c51]
	I0729 16:39:22.190917    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:22.203342    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:39:22.203416    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:22.213591    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:39:22.213661    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:22.223827    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:39:22.223905    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:22.233879    4979 logs.go:276] 0 containers: []
	W0729 16:39:22.233891    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:22.233949    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:22.245211    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:39:22.245227    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:39:22.245233    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:22.257009    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:22.257019    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:22.294422    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:39:22.294434    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:39:22.310053    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:39:22.310064    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:39:22.324450    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:39:22.324465    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:39:22.339815    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:39:22.339830    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:39:22.357211    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:39:22.357226    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:39:22.369072    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:22.369083    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:22.394530    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:22.394538    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:22.428947    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:22.428954    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:22.433978    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:39:22.433984    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:39:22.449909    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:39:22.449922    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:39:22.461992    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:39:22.462005    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
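Once the IDs are known, each "Gathering logs for <component> [<id>] ..." line corresponds to pulling that container's last 400 log lines through a shell, exactly as the /bin/bash -c "docker logs --tail 400 <id>" commands show. A short sketch under the assumption that running the command locally is equivalent for illustration; gatherContainerLogs is a hypothetical helper.

package main

import (
	"fmt"
	"os/exec"
)

// gatherContainerLogs pulls the last 400 lines of one container's output,
// matching the docker logs --tail 400 commands in the sweep above.
func gatherContainerLogs(name, id string) (string, error) {
	fmt.Printf("Gathering logs for %s [%s] ...\n", name, id)
	out, err := exec.Command("/bin/bash", "-c",
		"docker logs --tail 400 "+id).CombinedOutput()
	return string(out), err
}

func main() {
	// ID taken from the log above; on any other host this will simply fail.
	if logs, err := gatherContainerLogs("kube-apiserver", "59c38f954feb"); err == nil {
		fmt.Print(logs)
	}
}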
	I0729 16:39:24.975529    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:29.976090    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:29.976183    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:29.986856    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:39:29.986927    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:29.998031    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:39:29.998107    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:30.013315    4979 logs.go:276] 2 containers: [148e5d9f19b4 1d8ed2209c51]
	I0729 16:39:30.013388    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:30.024009    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:39:30.024072    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:30.034323    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:39:30.034396    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:30.044427    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:39:30.044495    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:30.054489    4979 logs.go:276] 0 containers: []
	W0729 16:39:30.054501    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:30.054560    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:30.064517    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:39:30.064532    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:39:30.064537    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:39:30.078663    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:39:30.078674    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:39:30.090214    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:39:30.090226    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:39:30.101765    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:39:30.101775    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:39:30.113641    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:30.113652    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:30.118090    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:30.118099    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:30.152176    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:39:30.152186    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:39:30.164905    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:39:30.164916    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:39:30.180173    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:39:30.180184    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:39:30.200720    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:30.200732    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:30.224554    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:39:30.224562    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:30.237473    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:30.237485    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:30.272345    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:39:30.272357    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
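The non-container sources in every sweep — the kubelet and Docker/cri-docker journals, severity-filtered dmesg, crictl/docker container status, and kubectl describe nodes run from the VM's minikube binary directory — are plain shell pipelines. The command strings below are copied verbatim from the log; packaging them in a map and executing them this way is an illustrative assumption, not minikube's structure.

package main

import (
	"fmt"
	"os/exec"
)

// hostLogSources holds the sweep's non-container commands, verbatim.
var hostLogSources = map[string]string{
	"kubelet":          "sudo journalctl -u kubelet -n 400",
	"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	"describe nodes":   "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
}

func main() {
	for name, cmd := range hostLogSources {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("%s failed: %v\n", name, err)
			continue
		}
		fmt.Print(string(out))
	}
}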
	I0729 16:39:32.788206    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:37.790296    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:37.790388    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:37.802176    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:39:37.802248    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:37.813771    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:39:37.813848    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:37.825320    4979 logs.go:276] 2 containers: [148e5d9f19b4 1d8ed2209c51]
	I0729 16:39:37.825411    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:37.841609    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:39:37.841683    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:37.853345    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:39:37.853416    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:37.865160    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:39:37.865238    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:37.875951    4979 logs.go:276] 0 containers: []
	W0729 16:39:37.875964    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:37.876023    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:37.886496    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:39:37.886511    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:37.886517    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:37.923192    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:37.923206    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:37.928516    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:37.928526    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:37.974925    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:39:37.974937    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:39:37.990537    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:39:37.990548    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:39:38.005612    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:39:38.005623    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:39:38.022114    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:39:38.022125    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:39:38.038015    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:39:38.038026    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:39:38.055595    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:39:38.055606    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:39:38.083001    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:39:38.083010    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:39:38.095077    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:39:38.095089    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:39:38.113022    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:38.113032    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:38.138202    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:39:38.138213    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:40.651324    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:45.651622    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:45.651696    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:45.663367    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:39:45.663441    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:45.674406    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:39:45.674476    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:45.686353    4979 logs.go:276] 2 containers: [148e5d9f19b4 1d8ed2209c51]
	I0729 16:39:45.686425    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:45.697607    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:39:45.697677    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:45.709287    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:39:45.709357    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:45.725848    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:39:45.725911    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:45.738640    4979 logs.go:276] 0 containers: []
	W0729 16:39:45.738652    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:45.738708    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:45.751825    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:39:45.751844    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:39:45.751849    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:39:45.767195    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:39:45.767205    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:39:45.779633    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:45.779646    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:45.805721    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:45.805729    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:45.842053    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:45.842066    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:45.846847    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:39:45.846853    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:39:45.859264    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:39:45.859272    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:39:45.871941    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:39:45.871949    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:39:45.889172    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:39:45.889183    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:39:45.901642    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:39:45.901654    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:39:45.919685    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:39:45.919699    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:45.931590    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:45.931602    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:45.966692    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:39:45.966707    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:39:48.482443    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:53.484590    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:53.484673    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:53.496331    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:39:53.496406    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:53.507778    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:39:53.507849    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:53.518724    4979 logs.go:276] 2 containers: [148e5d9f19b4 1d8ed2209c51]
	I0729 16:39:53.518803    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:53.529984    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:39:53.530055    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:53.541278    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:39:53.541352    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:53.553276    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:39:53.553347    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:53.564723    4979 logs.go:276] 0 containers: []
	W0729 16:39:53.564735    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:53.564799    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:53.576049    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:39:53.576063    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:39:53.576068    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:39:53.589019    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:39:53.589031    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:39:53.603943    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:53.603956    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:53.640019    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:39:53.640039    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:39:53.655395    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:39:53.655408    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:39:53.668753    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:39:53.668764    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:39:53.685476    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:39:53.685487    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:39:53.703711    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:39:53.703723    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:39:53.716442    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:53.716455    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:53.744717    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:39:53.744738    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:53.758759    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:53.758772    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:53.764061    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:53.764070    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:53.803360    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:39:53.803375    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:39:56.321257    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:01.323481    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:01.323601    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:01.337496    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:40:01.337579    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:01.350665    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:40:01.350737    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:01.362377    4979 logs.go:276] 2 containers: [148e5d9f19b4 1d8ed2209c51]
	I0729 16:40:01.362443    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:01.373988    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:40:01.374055    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:01.386900    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:40:01.386978    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:01.399131    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:40:01.399203    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:01.410475    4979 logs.go:276] 0 containers: []
	W0729 16:40:01.410485    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:01.410543    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:01.421413    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:40:01.421430    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:40:01.421435    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:40:01.436785    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:40:01.436794    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:40:01.448782    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:40:01.448793    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:40:01.466705    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:01.466720    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:01.491771    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:40:01.491788    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:40:01.504564    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:01.504576    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:01.541392    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:01.541413    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:01.547498    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:01.547509    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:01.585090    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:40:01.585101    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:40:01.608544    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:40:01.608555    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:40:01.622496    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:40:01.622507    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:40:01.643239    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:40:01.643248    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:40:01.655688    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:40:01.655701    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:04.173184    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:09.174958    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:09.175286    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:09.193501    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:40:09.193583    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:09.207676    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:40:09.207746    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:09.219350    4979 logs.go:276] 2 containers: [148e5d9f19b4 1d8ed2209c51]
	I0729 16:40:09.219416    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:09.229934    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:40:09.229999    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:09.242644    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:40:09.242710    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:09.253013    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:40:09.253080    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:09.263860    4979 logs.go:276] 0 containers: []
	W0729 16:40:09.263870    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:09.263900    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:09.279965    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:40:09.279989    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:40:09.279996    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:40:09.292577    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:40:09.292587    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:40:09.310840    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:40:09.310850    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:40:09.328482    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:09.328494    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:09.365669    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:40:09.365681    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:40:09.378230    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:40:09.378242    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:40:09.390510    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:40:09.390522    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:40:09.406783    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:09.406796    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:09.432291    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:40:09.432304    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:09.444278    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:09.444291    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:09.481170    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:09.481180    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:09.486634    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:40:09.486646    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:40:09.508089    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:40:09.508099    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:40:12.029502    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:17.031580    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:17.031872    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:17.047832    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:40:17.047919    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:17.060091    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:40:17.060171    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:17.071538    4979 logs.go:276] 2 containers: [148e5d9f19b4 1d8ed2209c51]
	I0729 16:40:17.071610    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:17.082101    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:40:17.082172    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:17.092044    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:40:17.092120    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:17.102905    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:40:17.102973    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:17.112834    4979 logs.go:276] 0 containers: []
	W0729 16:40:17.112845    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:17.112903    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:17.123284    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:40:17.123301    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:17.123306    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:17.155836    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:17.155843    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:17.160330    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:40:17.160339    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:40:17.178748    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:40:17.178757    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:40:17.192137    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:40:17.192150    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:40:17.208760    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:40:17.208776    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:40:17.221415    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:17.221428    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:17.247331    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:17.247345    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:17.286535    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:40:17.286548    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:40:17.301347    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:40:17.301362    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:40:17.314953    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:40:17.314964    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:40:17.331960    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:40:17.331971    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:40:17.351297    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:40:17.351306    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:19.870313    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:24.872582    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:24.872794    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:24.889207    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:40:24.889304    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:24.902110    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:40:24.902191    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:24.912903    4979 logs.go:276] 4 containers: [171279a29803 af10021a5c6e 148e5d9f19b4 1d8ed2209c51]
	I0729 16:40:24.912975    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:24.923933    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:40:24.924003    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:24.936757    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:40:24.936822    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:24.947146    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:40:24.947208    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:24.958728    4979 logs.go:276] 0 containers: []
	W0729 16:40:24.958739    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:24.958799    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:24.969239    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:40:24.969259    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:40:24.969265    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:40:24.984218    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:40:24.984229    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:40:24.996272    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:40:24.996282    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:40:25.007682    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:25.007695    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:25.012310    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:40:25.012319    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:40:25.029553    4979 logs.go:123] Gathering logs for coredns [af10021a5c6e] ...
	I0729 16:40:25.029564    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af10021a5c6e"
	I0729 16:40:25.040769    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:25.040780    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:25.065583    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:25.065594    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:25.100351    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:40:25.100362    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:40:25.117113    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:40:25.117127    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:25.129787    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:25.129800    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:25.180868    4979 logs.go:123] Gathering logs for coredns [171279a29803] ...
	I0729 16:40:25.180880    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 171279a29803"
	I0729 16:40:25.193453    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:40:25.193466    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:40:25.206341    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:40:25.206352    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:40:25.223882    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:40:25.223894    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
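By this point the coredns filter returns four IDs instead of two (171279a29803 and af10021a5c6e have appeared alongside the original pair), and the sweep simply gathers whatever it finds. Tying the pieces together, the outer loop visible across this whole section alternates a ~5 s healthz probe with a full diagnostic sweep and a short pause. The sketch below is a hedged reconstruction with stubbed helpers; probeHealthz and diagnosticSweep stand in for the earlier sketches, and the deadline and sleep values are assumptions read off the timestamps.

package main

import (
	"fmt"
	"time"
)

// probeHealthz and diagnosticSweep are stand-ins for the sketches earlier
// in this section; both names are introduced here for illustration.
func probeHealthz() error {
	return fmt.Errorf("context deadline exceeded") // always fails, as in this run
}

func diagnosticSweep() {
	// container discovery + per-container and host log gathering
}

func main() {
	deadline := time.Now().Add(15 * time.Second) // kept short here; the real run spans minutes
	for time.Now().Before(deadline) {
		if err := probeHealthz(); err == nil {
			fmt.Println("apiserver healthy")
			return
		}
		diagnosticSweep()
		time.Sleep(2500 * time.Millisecond) // rough gap between sweeps in the log
	}
	fmt.Println("gave up waiting for the apiserver")
}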
	I0729 16:40:27.746909    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:32.749145    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:32.749326    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:32.762281    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:40:32.762358    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:32.773077    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:40:32.773147    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:32.783176    4979 logs.go:276] 4 containers: [171279a29803 af10021a5c6e 148e5d9f19b4 1d8ed2209c51]
	I0729 16:40:32.783249    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:32.793776    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:40:32.793842    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:32.803729    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:40:32.803795    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:32.814425    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:40:32.814492    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:32.828522    4979 logs.go:276] 0 containers: []
	W0729 16:40:32.828540    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:32.828591    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:32.839147    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:40:32.839165    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:40:32.839171    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:40:32.851247    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:40:32.851259    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:40:32.867307    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:40:32.867318    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:40:32.884095    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:40:32.884105    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:32.895399    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:32.895410    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:32.929175    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:40:32.929190    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:40:32.941089    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:32.941105    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:32.966361    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:32.966372    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:33.000094    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:33.000118    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:33.011805    4979 logs.go:123] Gathering logs for coredns [171279a29803] ...
	I0729 16:40:33.011817    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 171279a29803"
	I0729 16:40:33.025890    4979 logs.go:123] Gathering logs for coredns [af10021a5c6e] ...
	I0729 16:40:33.025902    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af10021a5c6e"
	I0729 16:40:33.038606    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:40:33.038617    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:40:33.051756    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:40:33.051767    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:40:33.066778    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:40:33.066785    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:40:33.081997    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:40:33.082013    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:40:35.599815    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:40.601972    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:40.602148    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:40.618544    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:40:40.618630    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:40.631560    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:40:40.631633    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:40.642456    4979 logs.go:276] 4 containers: [171279a29803 af10021a5c6e 148e5d9f19b4 1d8ed2209c51]
	I0729 16:40:40.642522    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:40.656805    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:40:40.656879    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:40.669066    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:40:40.669138    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:40.679701    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:40:40.679765    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:40.689897    4979 logs.go:276] 0 containers: []
	W0729 16:40:40.689908    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:40.689965    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:40.700424    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:40:40.700441    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:40.700446    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:40.704732    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:40:40.704741    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:40:40.716161    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:40:40.716173    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:40:40.731973    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:40:40.731984    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:40:40.744256    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:40:40.744270    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:40:40.755673    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:40:40.755684    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:40:40.769644    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:40:40.769656    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:40:40.786309    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:40:40.786320    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:40:40.803515    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:40:40.803525    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:40.814924    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:40.814933    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:40.849686    4979 logs.go:123] Gathering logs for coredns [af10021a5c6e] ...
	I0729 16:40:40.849700    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af10021a5c6e"
	I0729 16:40:40.861490    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:40.861501    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:40.894182    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:40:40.894192    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:40:40.908137    4979 logs.go:123] Gathering logs for coredns [171279a29803] ...
	I0729 16:40:40.908148    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 171279a29803"
	I0729 16:40:40.919913    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:40.919923    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:43.448820    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:48.451019    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:48.451190    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:48.463258    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:40:48.463329    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:48.474867    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:40:48.474939    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:48.485714    4979 logs.go:276] 4 containers: [171279a29803 af10021a5c6e 148e5d9f19b4 1d8ed2209c51]
	I0729 16:40:48.485794    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:48.496147    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:40:48.496216    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:48.505973    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:40:48.506048    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:48.516127    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:40:48.516191    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:48.526169    4979 logs.go:276] 0 containers: []
	W0729 16:40:48.526181    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:48.526233    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:48.536860    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:40:48.536877    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:48.536883    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:48.572941    4979 logs.go:123] Gathering logs for coredns [171279a29803] ...
	I0729 16:40:48.572953    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 171279a29803"
	I0729 16:40:48.588374    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:40:48.588385    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:40:48.600753    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:40:48.600764    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:40:48.617878    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:40:48.617888    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:40:48.629495    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:40:48.629504    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:48.641608    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:48.641623    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:48.646540    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:40:48.646548    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:40:48.660829    4979 logs.go:123] Gathering logs for coredns [af10021a5c6e] ...
	I0729 16:40:48.660841    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af10021a5c6e"
	I0729 16:40:48.672361    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:48.672372    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:48.705336    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:40:48.705342    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:40:48.716428    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:40:48.716440    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:40:48.732131    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:40:48.732142    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:40:48.752824    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:40:48.752840    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:40:48.764484    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:48.764495    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:51.290089    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:56.292110    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:56.292275    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:56.306187    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:40:56.306269    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:56.318445    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:40:56.318523    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:56.329743    4979 logs.go:276] 4 containers: [171279a29803 af10021a5c6e 148e5d9f19b4 1d8ed2209c51]
	I0729 16:40:56.329816    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:56.339942    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:40:56.340014    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:56.354305    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:40:56.354377    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:56.364543    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:40:56.364609    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:56.374660    4979 logs.go:276] 0 containers: []
	W0729 16:40:56.374672    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:56.374727    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:56.384933    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:40:56.384948    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:56.384953    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:56.422998    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:40:56.423014    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:40:56.442232    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:40:56.442245    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:40:56.454702    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:40:56.454716    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:40:56.466796    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:56.466806    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:56.491700    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:40:56.491707    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:56.505150    4979 logs.go:123] Gathering logs for coredns [171279a29803] ...
	I0729 16:40:56.505164    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 171279a29803"
	I0729 16:40:56.517237    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:40:56.517253    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:40:56.532803    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:40:56.532817    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:40:56.550087    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:56.550101    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:56.582983    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:40:56.582990    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:40:56.598683    4979 logs.go:123] Gathering logs for coredns [af10021a5c6e] ...
	I0729 16:40:56.598695    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af10021a5c6e"
	I0729 16:40:56.610325    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:40:56.610336    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:40:56.622000    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:56.622014    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:56.627176    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:40:56.627184    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:40:59.141106    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:04.143545    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:04.143991    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:41:04.190880    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:41:04.191025    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:41:04.210862    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:41:04.210952    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:41:04.225691    4979 logs.go:276] 4 containers: [171279a29803 af10021a5c6e 148e5d9f19b4 1d8ed2209c51]
	I0729 16:41:04.225763    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:41:04.240435    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:41:04.240503    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:41:04.251720    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:41:04.251792    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:41:04.263488    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:41:04.263554    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:41:04.274614    4979 logs.go:276] 0 containers: []
	W0729 16:41:04.274626    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:41:04.274681    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:41:04.285903    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:41:04.285921    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:41:04.285927    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:41:04.300781    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:41:04.300794    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:41:04.312485    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:41:04.312497    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:41:04.330935    4979 logs.go:123] Gathering logs for coredns [171279a29803] ...
	I0729 16:41:04.330945    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 171279a29803"
	I0729 16:41:04.344472    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:41:04.344484    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:41:04.360123    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:41:04.360135    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:41:04.379222    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:41:04.379233    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:41:04.397540    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:41:04.397550    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:41:04.432554    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:41:04.432565    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:41:04.436984    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:41:04.436993    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:41:04.472281    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:41:04.472292    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:41:04.493587    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:41:04.493597    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:41:04.517594    4979 logs.go:123] Gathering logs for coredns [af10021a5c6e] ...
	I0729 16:41:04.517601    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af10021a5c6e"
	I0729 16:41:04.535542    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:41:04.535553    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:41:04.552597    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:41:04.552607    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:41:07.066468    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:12.068823    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:12.069216    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:41:12.104940    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:41:12.105068    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:41:12.123861    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:41:12.123962    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:41:12.138091    4979 logs.go:276] 4 containers: [171279a29803 af10021a5c6e 148e5d9f19b4 1d8ed2209c51]
	I0729 16:41:12.138168    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:41:12.154534    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:41:12.154603    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:41:12.172423    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:41:12.172494    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:41:12.183236    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:41:12.183313    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:41:12.195120    4979 logs.go:276] 0 containers: []
	W0729 16:41:12.195138    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:41:12.195204    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:41:12.205659    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:41:12.205677    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:41:12.205682    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:41:12.210351    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:41:12.210358    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:41:12.226513    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:41:12.226527    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:41:12.238505    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:41:12.238529    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:41:12.256899    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:41:12.256913    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:41:12.269127    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:41:12.269137    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:41:12.302620    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:41:12.302633    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:41:12.314626    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:41:12.314643    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:41:12.329667    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:41:12.329678    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:41:12.345107    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:41:12.345117    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:41:12.371222    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:41:12.371237    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:41:12.405211    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:41:12.405223    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:41:12.420719    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:41:12.420730    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:41:12.432278    4979 logs.go:123] Gathering logs for coredns [171279a29803] ...
	I0729 16:41:12.432289    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 171279a29803"
	I0729 16:41:12.445279    4979 logs.go:123] Gathering logs for coredns [af10021a5c6e] ...
	I0729 16:41:12.445293    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af10021a5c6e"
	I0729 16:41:14.960544    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:19.962818    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:19.963037    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:41:19.984361    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:41:19.984460    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:41:19.999608    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:41:19.999693    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:41:20.012141    4979 logs.go:276] 4 containers: [171279a29803 af10021a5c6e 148e5d9f19b4 1d8ed2209c51]
	I0729 16:41:20.012210    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:41:20.022749    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:41:20.022821    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:41:20.033292    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:41:20.033366    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:41:20.044382    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:41:20.044455    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:41:20.054821    4979 logs.go:276] 0 containers: []
	W0729 16:41:20.054833    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:41:20.054890    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:41:20.065534    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:41:20.065553    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:41:20.065568    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:41:20.100300    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:41:20.100310    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:41:20.105097    4979 logs.go:123] Gathering logs for coredns [af10021a5c6e] ...
	I0729 16:41:20.105105    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af10021a5c6e"
	I0729 16:41:20.119813    4979 logs.go:123] Gathering logs for coredns [171279a29803] ...
	I0729 16:41:20.119828    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 171279a29803"
	I0729 16:41:20.131801    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:41:20.131813    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:41:20.144404    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:41:20.144416    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:41:20.169774    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:41:20.169783    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:41:20.216101    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:41:20.216113    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:41:20.232328    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:41:20.232339    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:41:20.244056    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:41:20.244071    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:41:20.261609    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:41:20.261621    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:41:20.274384    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:41:20.274394    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:41:20.289748    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:41:20.289760    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:41:20.309364    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:41:20.309376    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:41:20.324879    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:41:20.324890    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:41:22.839008    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:27.841337    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:27.841603    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:41:27.869256    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:41:27.869377    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:41:27.886496    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:41:27.886588    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:41:27.899839    4979 logs.go:276] 4 containers: [171279a29803 af10021a5c6e 148e5d9f19b4 1d8ed2209c51]
	I0729 16:41:27.899921    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:41:27.911635    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:41:27.911701    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:41:27.922673    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:41:27.922736    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:41:27.933159    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:41:27.933233    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:41:27.950403    4979 logs.go:276] 0 containers: []
	W0729 16:41:27.950416    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:41:27.950470    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:41:27.960478    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:41:27.960493    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:41:27.960497    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:41:27.993640    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:41:27.993652    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:41:28.026897    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:41:28.026913    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:41:28.041010    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:41:28.041022    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:41:28.064971    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:41:28.064982    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:41:28.081708    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:41:28.081726    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:41:28.093610    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:41:28.093624    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:41:28.107395    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:41:28.107416    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:41:28.111967    4979 logs.go:123] Gathering logs for coredns [171279a29803] ...
	I0729 16:41:28.111974    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 171279a29803"
	I0729 16:41:28.129714    4979 logs.go:123] Gathering logs for coredns [af10021a5c6e] ...
	I0729 16:41:28.129725    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af10021a5c6e"
	I0729 16:41:28.141366    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:41:28.141375    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:41:28.157018    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:41:28.157028    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:41:28.174579    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:41:28.174590    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:41:28.189142    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:41:28.189153    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:41:28.201675    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:41:28.201688    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:41:30.714340    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:35.716522    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:35.716663    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:41:35.730712    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:41:35.730785    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:41:35.742428    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:41:35.742498    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:41:35.761770    4979 logs.go:276] 4 containers: [171279a29803 af10021a5c6e 148e5d9f19b4 1d8ed2209c51]
	I0729 16:41:35.761843    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:41:35.772685    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:41:35.772752    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:41:35.784169    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:41:35.784238    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:41:35.800266    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:41:35.800338    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:41:35.810890    4979 logs.go:276] 0 containers: []
	W0729 16:41:35.810903    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:41:35.810964    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:41:35.821822    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:41:35.821841    4979 logs.go:123] Gathering logs for coredns [af10021a5c6e] ...
	I0729 16:41:35.821846    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af10021a5c6e"
	I0729 16:41:35.836927    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:41:35.836940    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:41:35.850320    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:41:35.850334    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:41:35.865020    4979 logs.go:123] Gathering logs for coredns [171279a29803] ...
	I0729 16:41:35.865031    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 171279a29803"
	I0729 16:41:35.877288    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:41:35.877298    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:41:35.892774    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:41:35.892786    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:41:35.932673    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:41:35.932687    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:41:35.947762    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:41:35.947774    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:41:35.975584    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:41:35.975601    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:41:35.988368    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:41:35.988381    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:41:36.000601    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:41:36.000615    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:41:36.014771    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:41:36.014788    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:41:36.035059    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:41:36.035073    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:41:36.047261    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:41:36.047275    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:41:36.083044    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:41:36.083058    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:41:38.589053    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:43.591093    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:43.591234    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:41:43.603793    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:41:43.603869    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:41:43.614252    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:41:43.614324    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:41:43.625316    4979 logs.go:276] 4 containers: [171279a29803 af10021a5c6e 148e5d9f19b4 1d8ed2209c51]
	I0729 16:41:43.625397    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:41:43.635935    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:41:43.636006    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:41:43.646830    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:41:43.646898    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:41:43.657144    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:41:43.657213    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:41:43.667865    4979 logs.go:276] 0 containers: []
	W0729 16:41:43.667883    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:41:43.667945    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:41:43.678433    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:41:43.678450    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:41:43.678455    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:41:43.689794    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:41:43.689806    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:41:43.700890    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:41:43.700900    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:41:43.733910    4979 logs.go:123] Gathering logs for coredns [171279a29803] ...
	I0729 16:41:43.733918    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 171279a29803"
	I0729 16:41:43.745596    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:41:43.745607    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:41:43.749887    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:41:43.749896    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:41:43.761244    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:41:43.761254    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:41:43.772962    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:41:43.772972    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:41:43.790832    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:41:43.790843    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:41:43.802526    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:41:43.802536    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:41:43.841436    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:41:43.841449    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:41:43.862063    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:41:43.862074    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:41:43.877807    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:41:43.877816    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:41:43.901447    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:41:43.901455    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:41:43.915656    4979 logs.go:123] Gathering logs for coredns [af10021a5c6e] ...
	I0729 16:41:43.915664    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af10021a5c6e"
	I0729 16:41:46.430003    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:51.430413    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:51.430501    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:41:51.445767    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:41:51.445848    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:41:51.458472    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:41:51.458544    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:41:51.469222    4979 logs.go:276] 4 containers: [171279a29803 af10021a5c6e 148e5d9f19b4 1d8ed2209c51]
	I0729 16:41:51.469294    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:41:51.479542    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:41:51.479615    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:41:51.493580    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:41:51.493655    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:41:51.507168    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:41:51.507233    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:41:51.516663    4979 logs.go:276] 0 containers: []
	W0729 16:41:51.516676    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:41:51.516737    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:41:51.527547    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:41:51.527566    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:41:51.527571    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:41:51.542868    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:41:51.542878    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:41:51.556734    4979 logs.go:123] Gathering logs for coredns [af10021a5c6e] ...
	I0729 16:41:51.556744    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af10021a5c6e"
	I0729 16:41:51.568853    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:41:51.568865    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:41:51.580092    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:41:51.580102    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:41:51.604131    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:41:51.604148    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:41:51.616399    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:41:51.616412    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:41:51.649698    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:41:51.649709    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:41:51.663669    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:41:51.663681    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:41:51.684300    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:41:51.684310    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:41:51.719635    4979 logs.go:123] Gathering logs for coredns [171279a29803] ...
	I0729 16:41:51.719645    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 171279a29803"
	I0729 16:41:51.731920    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:41:51.731930    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:41:51.743856    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:41:51.743868    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:41:51.755616    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:41:51.755626    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:41:51.768228    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:41:51.768238    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:41:54.274987    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:59.277169    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:59.277373    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:41:59.297363    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:41:59.297459    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:41:59.311812    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:41:59.311885    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:41:59.323532    4979 logs.go:276] 4 containers: [171279a29803 af10021a5c6e 148e5d9f19b4 1d8ed2209c51]
	I0729 16:41:59.323606    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:41:59.334285    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:41:59.334355    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:41:59.345409    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:41:59.345474    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:41:59.356225    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:41:59.356293    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:41:59.367082    4979 logs.go:276] 0 containers: []
	W0729 16:41:59.367093    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:41:59.367148    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:41:59.377698    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:41:59.377715    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:41:59.377724    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:41:59.389171    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:41:59.389185    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:41:59.404630    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:41:59.404641    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:41:59.417654    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:41:59.417665    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:41:59.441888    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:41:59.441898    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:41:59.456523    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:41:59.456536    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:41:59.492028    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:41:59.492040    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:41:59.506136    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:41:59.506149    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:41:59.529000    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:41:59.529010    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:41:59.533260    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:41:59.533265    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:41:59.547744    4979 logs.go:123] Gathering logs for coredns [af10021a5c6e] ...
	I0729 16:41:59.547755    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af10021a5c6e"
	I0729 16:41:59.559693    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:41:59.559706    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:41:59.593586    4979 logs.go:123] Gathering logs for coredns [171279a29803] ...
	I0729 16:41:59.593596    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 171279a29803"
	I0729 16:41:59.605046    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:41:59.605057    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:41:59.616985    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:41:59.616996    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:42:02.130171    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:42:07.132252    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:42:07.135872    4979 out.go:177] 
	W0729 16:42:07.139710    4979 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0729 16:42:07.139717    4979 out.go:239] * 
	W0729 16:42:07.140428    4979 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:42:07.151680    4979 out.go:177] 

** /stderr **
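The stderr capture above repeats one cycle until the 6m node-wait deadline expires: probe the apiserver healthz endpoint, time out after 5s, enumerate the control-plane containers, tail their logs, and retry. The bash sketch below is illustrative only (not minikube code); it approximates that probe loop using the guest IP, port, and timeouts taken verbatim from the log lines above.

	# Hypothetical reproduction of the healthz probe loop logged by
	# api_server.go: poll roughly every 2.5s with a 5s client timeout
	# until 6m0s elapses ("wait 6m0s for node" in the error above).
	deadline=$((SECONDS + 360))
	while [ "$SECONDS" -lt "$deadline" ]; do
	  # -k: the guest apiserver serves a self-signed certificate;
	  # --max-time 5 mirrors the Client.Timeout seen in the log.
	  if curl -sk --max-time 5 https://10.0.2.15:8443/healthz | grep -q ok; then
	    echo "apiserver healthy"
	    exit 0
	  fi
	  sleep 2.5
	done
	echo "apiserver healthz never reported healthy" >&2
	exit 80   # matches the exit status reported by the failed start
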
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-896000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-29 16:42:07.250065 -0700 PDT m=+3338.817185042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-896000 -n running-upgrade-896000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-896000 -n running-upgrade-896000: exit status 2 (15.538370417s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-896000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-163000          | force-systemd-flag-163000 | jenkins | v1.33.1 | 29 Jul 24 16:32 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-113000              | force-systemd-env-113000  | jenkins | v1.33.1 | 29 Jul 24 16:32 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-113000           | force-systemd-env-113000  | jenkins | v1.33.1 | 29 Jul 24 16:32 PDT | 29 Jul 24 16:32 PDT |
	| start   | -p docker-flags-942000                | docker-flags-942000       | jenkins | v1.33.1 | 29 Jul 24 16:32 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-163000             | force-systemd-flag-163000 | jenkins | v1.33.1 | 29 Jul 24 16:32 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-163000          | force-systemd-flag-163000 | jenkins | v1.33.1 | 29 Jul 24 16:32 PDT | 29 Jul 24 16:32 PDT |
	| start   | -p cert-expiration-870000             | cert-expiration-870000    | jenkins | v1.33.1 | 29 Jul 24 16:32 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-942000 ssh               | docker-flags-942000       | jenkins | v1.33.1 | 29 Jul 24 16:32 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-942000 ssh               | docker-flags-942000       | jenkins | v1.33.1 | 29 Jul 24 16:32 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-942000                | docker-flags-942000       | jenkins | v1.33.1 | 29 Jul 24 16:32 PDT | 29 Jul 24 16:32 PDT |
	| start   | -p cert-options-528000                | cert-options-528000       | jenkins | v1.33.1 | 29 Jul 24 16:32 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-528000 ssh               | cert-options-528000       | jenkins | v1.33.1 | 29 Jul 24 16:32 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-528000 -- sudo        | cert-options-528000       | jenkins | v1.33.1 | 29 Jul 24 16:32 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-528000                | cert-options-528000       | jenkins | v1.33.1 | 29 Jul 24 16:32 PDT | 29 Jul 24 16:32 PDT |
	| start   | -p running-upgrade-896000             | minikube                  | jenkins | v1.26.0 | 29 Jul 24 16:32 PDT | 29 Jul 24 16:33 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-896000             | running-upgrade-896000    | jenkins | v1.33.1 | 29 Jul 24 16:33 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-870000             | cert-expiration-870000    | jenkins | v1.33.1 | 29 Jul 24 16:35 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-870000             | cert-expiration-870000    | jenkins | v1.33.1 | 29 Jul 24 16:35 PDT | 29 Jul 24 16:35 PDT |
	| start   | -p kubernetes-upgrade-507000          | kubernetes-upgrade-507000 | jenkins | v1.33.1 | 29 Jul 24 16:35 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-507000          | kubernetes-upgrade-507000 | jenkins | v1.33.1 | 29 Jul 24 16:35 PDT | 29 Jul 24 16:35 PDT |
	| start   | -p kubernetes-upgrade-507000          | kubernetes-upgrade-507000 | jenkins | v1.33.1 | 29 Jul 24 16:35 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-507000          | kubernetes-upgrade-507000 | jenkins | v1.33.1 | 29 Jul 24 16:35 PDT | 29 Jul 24 16:35 PDT |
	| start   | -p stopped-upgrade-170000             | minikube                  | jenkins | v1.26.0 | 29 Jul 24 16:35 PDT | 29 Jul 24 16:36 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-170000 stop           | minikube                  | jenkins | v1.26.0 | 29 Jul 24 16:36 PDT | 29 Jul 24 16:36 PDT |
	| start   | -p stopped-upgrade-170000             | stopped-upgrade-170000    | jenkins | v1.33.1 | 29 Jul 24 16:36 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 16:36:56
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 16:36:56.113835    5115 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:36:56.113983    5115 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:36:56.113986    5115 out.go:304] Setting ErrFile to fd 2...
	I0729 16:36:56.113989    5115 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:36:56.114102    5115 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:36:56.115167    5115 out.go:298] Setting JSON to false
	I0729 16:36:56.132597    5115 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3983,"bootTime":1722292233,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:36:56.132675    5115 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:36:56.138051    5115 out.go:177] * [stopped-upgrade-170000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:36:56.146103    5115 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:36:56.146168    5115 notify.go:220] Checking for updates...
	I0729 16:36:56.153909    5115 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:36:56.157097    5115 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:36:56.160082    5115 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:36:56.163068    5115 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:36:56.166007    5115 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:36:56.169319    5115 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:36:56.172043    5115 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 16:36:56.175026    5115 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:36:56.179055    5115 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:36:56.186104    5115 start.go:297] selected driver: qemu2
	I0729 16:36:56.186112    5115 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50503 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 16:36:56.186169    5115 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:36:56.188834    5115 cni.go:84] Creating CNI manager for ""
	I0729 16:36:56.188853    5115 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:36:56.188887    5115 start.go:340] cluster config:
	{Name:stopped-upgrade-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50503 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 16:36:56.188944    5115 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:36:56.196021    5115 out.go:177] * Starting "stopped-upgrade-170000" primary control-plane node in "stopped-upgrade-170000" cluster
	I0729 16:36:56.199982    5115 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 16:36:56.199996    5115 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0729 16:36:56.200002    5115 cache.go:56] Caching tarball of preloaded images
	I0729 16:36:56.200051    5115 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:36:56.200057    5115 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0729 16:36:56.200111    5115 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/config.json ...
	I0729 16:36:56.200598    5115 start.go:360] acquireMachinesLock for stopped-upgrade-170000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:36:56.200634    5115 start.go:364] duration metric: took 28.375µs to acquireMachinesLock for "stopped-upgrade-170000"
	I0729 16:36:56.200646    5115 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:36:56.200652    5115 fix.go:54] fixHost starting: 
	I0729 16:36:56.200770    5115 fix.go:112] recreateIfNeeded on stopped-upgrade-170000: state=Stopped err=<nil>
	W0729 16:36:56.200779    5115 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:36:56.208037    5115 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-170000" ...
	I0729 16:36:55.607256    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:36:55.607362    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:36:55.618893    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:36:55.618968    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:36:55.629313    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:36:55.629392    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:36:55.647577    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:36:55.647650    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:36:55.658694    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:36:55.658763    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:36:55.669258    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:36:55.669330    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:36:55.679988    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:36:55.680062    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:36:55.690893    4979 logs.go:276] 0 containers: []
	W0729 16:36:55.690907    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:36:55.690968    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:36:55.708108    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:36:55.708130    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:36:55.708136    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:36:55.743636    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:36:55.743651    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:36:55.748182    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:36:55.748190    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:36:55.762198    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:36:55.762209    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:36:55.774209    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:36:55.774221    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:36:55.788570    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:36:55.788581    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:36:55.800467    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:36:55.800477    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:36:55.834805    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:36:55.834820    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:36:55.846508    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:36:55.846523    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:36:55.858708    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:36:55.858719    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:36:55.875616    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:36:55.875627    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:36:55.895239    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:36:55.895251    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:36:55.912826    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:36:55.912838    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:36:55.923655    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:36:55.923666    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:36:55.946430    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:36:55.946438    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:36:55.958998    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:36:55.959010    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:36:55.972803    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:36:55.972813    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:36:58.489749    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
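The 4979 process is looping: probe /healthz, hit the client timeout, dump container logs, retry. An equivalent manual probe, illustrative and only meaningful from inside the guest (10.0.2.15 is the VM's QEMU user-mode NAT address and is generally not reachable from the host):

    curl -k --max-time 5 https://10.0.2.15:8443/healthz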
	I0729 16:36:56.212078    5115 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:36:56.212168    5115 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/stopped-upgrade-170000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/stopped-upgrade-170000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/stopped-upgrade-170000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50469-:22,hostfwd=tcp::50470-:2376,hostname=stopped-upgrade-170000 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/stopped-upgrade-170000/disk.qcow2
	I0729 16:36:56.260009    5115 main.go:141] libmachine: STDOUT: 
	I0729 16:36:56.260039    5115 main.go:141] libmachine: STDERR: 
	I0729 16:36:56.260045    5115 main.go:141] libmachine: Waiting for VM to start (ssh -p 50469 docker@127.0.0.1)...
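The single-line QEMU invocation above is dense; the same command hand-wrapped for readability (flags unchanged, comments added):

    qemu-system-aarch64 \
      -M virt,highmem=off -cpu host \
      -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
      -display none -accel hvf -m 2200 -smp 2 -boot d \
      -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/stopped-upgrade-170000/boot2docker.iso \
      -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/stopped-upgrade-170000/monitor,server,nowait \
      -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/stopped-upgrade-170000/qemu.pid \
      -nic user,model=virtio,hostfwd=tcp::50469-:22,hostfwd=tcp::50470-:2376,hostname=stopped-upgrade-170000 \
      -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/stopped-upgrade-170000/disk.qcow2
    # -accel hvf: macOS Hypervisor.framework acceleration (the "Using hvf" line above)
    # -nic user ... hostfwd: NAT NIC; host 50469 -> guest 22 (SSH), host 50470 -> guest 2376 (Docker TLS)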
	I0729 16:37:03.492381    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:03.492547    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:37:03.504779    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:37:03.504862    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:37:03.516107    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:37:03.516181    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:37:03.527417    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:37:03.527482    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:37:03.538432    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:37:03.538511    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:37:03.551941    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:37:03.552013    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:37:03.562446    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:37:03.562528    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:37:03.572970    4979 logs.go:276] 0 containers: []
	W0729 16:37:03.572981    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:37:03.573052    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:37:03.586107    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:37:03.586126    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:37:03.586132    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:37:03.607968    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:37:03.607981    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:37:03.621787    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:37:03.621798    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:37:03.658732    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:37:03.658744    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:37:03.678764    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:37:03.678776    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:37:03.697224    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:37:03.697234    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:37:03.709953    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:37:03.709969    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:37:03.722197    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:37:03.722209    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:37:03.733955    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:37:03.733966    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:37:03.738493    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:37:03.738502    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:37:03.752279    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:37:03.752290    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:37:03.768745    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:37:03.768757    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:37:03.780084    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:37:03.780094    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:37:03.805124    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:37:03.805133    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:37:03.817503    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:37:03.817514    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:37:03.855537    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:37:03.855548    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:37:03.883380    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:37:03.883391    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:37:06.401162    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:11.403441    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:11.403899    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:37:11.442249    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:37:11.442393    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:37:11.464177    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:37:11.464293    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:37:11.478996    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:37:11.479067    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:37:11.491216    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:37:11.491293    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:37:11.501890    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:37:11.501957    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:37:11.512683    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:37:11.512754    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:37:11.526862    4979 logs.go:276] 0 containers: []
	W0729 16:37:11.526873    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:37:11.526930    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:37:11.537822    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:37:11.537841    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:37:11.537846    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:37:11.555708    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:37:11.555719    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:37:11.579212    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:37:11.579220    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:37:11.615372    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:37:11.615383    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:37:11.636529    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:37:11.636542    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:37:11.655337    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:37:11.655349    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:37:11.667200    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:37:11.667211    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:37:11.681296    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:37:11.681308    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:37:11.700531    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:37:11.700545    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:37:11.712573    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:37:11.712586    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:37:11.723891    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:37:11.723906    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:37:11.728031    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:37:11.728039    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:37:11.742457    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:37:11.742468    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:37:11.754298    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:37:11.754308    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:37:11.791870    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:37:11.791876    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:37:11.805637    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:37:11.805647    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:37:11.818207    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:37:11.818220    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:37:16.110640    5115 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/config.json ...
	I0729 16:37:16.110997    5115 machine.go:94] provisionDockerMachine start ...
	I0729 16:37:16.111073    5115 main.go:141] libmachine: Using SSH client type: native
	I0729 16:37:16.111274    5115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010caa10] 0x1010cd270 <nil>  [] 0s} localhost 50469 <nil> <nil>}
	I0729 16:37:16.111280    5115 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 16:37:16.165331    5115 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 16:37:16.165342    5115 buildroot.go:166] provisioning hostname "stopped-upgrade-170000"
	I0729 16:37:16.165395    5115 main.go:141] libmachine: Using SSH client type: native
	I0729 16:37:16.165506    5115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010caa10] 0x1010cd270 <nil>  [] 0s} localhost 50469 <nil> <nil>}
	I0729 16:37:16.165512    5115 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-170000 && echo "stopped-upgrade-170000" | sudo tee /etc/hostname
	I0729 16:37:16.220984    5115 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-170000
	
	I0729 16:37:16.221032    5115 main.go:141] libmachine: Using SSH client type: native
	I0729 16:37:16.221136    5115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010caa10] 0x1010cd270 <nil>  [] 0s} localhost 50469 <nil> <nil>}
	I0729 16:37:16.221144    5115 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-170000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-170000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-170000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 16:37:16.275618    5115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 16:37:16.275628    5115 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19348-1218/.minikube CaCertPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19348-1218/.minikube}
	I0729 16:37:16.275635    5115 buildroot.go:174] setting up certificates
	I0729 16:37:16.275639    5115 provision.go:84] configureAuth start
	I0729 16:37:16.275650    5115 provision.go:143] copyHostCerts
	I0729 16:37:16.275718    5115 exec_runner.go:144] found /Users/jenkins/minikube-integration/19348-1218/.minikube/ca.pem, removing ...
	I0729 16:37:16.275724    5115 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19348-1218/.minikube/ca.pem
	I0729 16:37:16.276036    5115 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19348-1218/.minikube/ca.pem (1082 bytes)
	I0729 16:37:16.276226    5115 exec_runner.go:144] found /Users/jenkins/minikube-integration/19348-1218/.minikube/cert.pem, removing ...
	I0729 16:37:16.276230    5115 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19348-1218/.minikube/cert.pem
	I0729 16:37:16.276284    5115 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19348-1218/.minikube/cert.pem (1123 bytes)
	I0729 16:37:16.276386    5115 exec_runner.go:144] found /Users/jenkins/minikube-integration/19348-1218/.minikube/key.pem, removing ...
	I0729 16:37:16.276389    5115 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19348-1218/.minikube/key.pem
	I0729 16:37:16.276436    5115 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19348-1218/.minikube/key.pem (1675 bytes)
	I0729 16:37:16.276558    5115 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-170000 san=[127.0.0.1 localhost minikube stopped-upgrade-170000]
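An illustrative way to confirm those SANs landed on the generated cert, mirroring the openssl check the cert-options test uses (run on the guest, where the cert is copied to /etc/docker/server.pem just below):

    openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'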
	I0729 16:37:16.341478    5115 provision.go:177] copyRemoteCerts
	I0729 16:37:16.341507    5115 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 16:37:16.341514    5115 sshutil.go:53] new ssh client: &{IP:localhost Port:50469 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/stopped-upgrade-170000/id_rsa Username:docker}
	I0729 16:37:16.370825    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 16:37:16.377219    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 16:37:16.383896    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 16:37:16.391251    5115 provision.go:87] duration metric: took 115.599709ms to configureAuth
	I0729 16:37:16.391260    5115 buildroot.go:189] setting minikube options for container-runtime
	I0729 16:37:16.391369    5115 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:37:16.391413    5115 main.go:141] libmachine: Using SSH client type: native
	I0729 16:37:16.391510    5115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010caa10] 0x1010cd270 <nil>  [] 0s} localhost 50469 <nil> <nil>}
	I0729 16:37:16.391514    5115 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 16:37:16.441325    5115 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 16:37:16.441332    5115 buildroot.go:70] root file system type: tmpfs
	I0729 16:37:16.441382    5115 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 16:37:16.441423    5115 main.go:141] libmachine: Using SSH client type: native
	I0729 16:37:16.441522    5115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010caa10] 0x1010cd270 <nil>  [] 0s} localhost 50469 <nil> <nil>}
	I0729 16:37:16.441554    5115 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 16:37:16.497094    5115 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 16:37:16.497138    5115 main.go:141] libmachine: Using SSH client type: native
	I0729 16:37:16.497253    5115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010caa10] 0x1010cd270 <nil>  [] 0s} localhost 50469 <nil> <nil>}
	I0729 16:37:16.497261    5115 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 16:37:16.831352    5115 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0729 16:37:16.831364    5115 machine.go:97] duration metric: took 720.379959ms to provisionDockerMachine
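The SSH one-liner above is an update-if-changed idiom: install and restart only when the generated unit differs from what is on disk. In general form (variable name illustrative):

    unit=/lib/systemd/system/docker.service
    # diff exits non-zero when the files differ (or, as here, the old unit is absent),
    # which triggers the replace-and-restart branch
    sudo diff -u "$unit" "$unit.new" || {
      sudo mv "$unit.new" "$unit"
      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    }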
	I0729 16:37:16.831371    5115 start.go:293] postStartSetup for "stopped-upgrade-170000" (driver="qemu2")
	I0729 16:37:16.831378    5115 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 16:37:16.831441    5115 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 16:37:16.831463    5115 sshutil.go:53] new ssh client: &{IP:localhost Port:50469 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/stopped-upgrade-170000/id_rsa Username:docker}
	I0729 16:37:16.859340    5115 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 16:37:16.860551    5115 info.go:137] Remote host: Buildroot 2021.02.12
	I0729 16:37:16.860558    5115 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19348-1218/.minikube/addons for local assets ...
	I0729 16:37:16.860640    5115 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19348-1218/.minikube/files for local assets ...
	I0729 16:37:16.860757    5115 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19348-1218/.minikube/files/etc/ssl/certs/17142.pem -> 17142.pem in /etc/ssl/certs
	I0729 16:37:16.860887    5115 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 16:37:16.863921    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/files/etc/ssl/certs/17142.pem --> /etc/ssl/certs/17142.pem (1708 bytes)
	I0729 16:37:16.871063    5115 start.go:296] duration metric: took 39.68825ms for postStartSetup
	I0729 16:37:16.871077    5115 fix.go:56] duration metric: took 20.6710505s for fixHost
	I0729 16:37:16.871111    5115 main.go:141] libmachine: Using SSH client type: native
	I0729 16:37:16.871226    5115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010caa10] 0x1010cd270 <nil>  [] 0s} localhost 50469 <nil> <nil>}
	I0729 16:37:16.871231    5115 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 16:37:16.920941    5115 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722296236.853030379
	
	I0729 16:37:16.920951    5115 fix.go:216] guest clock: 1722296236.853030379
	I0729 16:37:16.920955    5115 fix.go:229] Guest: 2024-07-29 16:37:16.853030379 -0700 PDT Remote: 2024-07-29 16:37:16.871079 -0700 PDT m=+20.776733126 (delta=-18.048621ms)
	I0729 16:37:16.920970    5115 fix.go:200] guest clock delta is within tolerance: -18.048621ms
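The %!s(MISSING)-style markers a few lines up are Go fmt artifacts from logging a literal format string; judging by the epoch-seconds.nanoseconds output (1722296236.853030379), the command actually sent to the guest for this clock-skew check is presumably:

    date +%s.%N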
	I0729 16:37:16.920973    5115 start.go:83] releasing machines lock for "stopped-upgrade-170000", held for 20.720958875s
	I0729 16:37:16.921035    5115 ssh_runner.go:195] Run: cat /version.json
	I0729 16:37:16.921038    5115 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 16:37:16.921044    5115 sshutil.go:53] new ssh client: &{IP:localhost Port:50469 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/stopped-upgrade-170000/id_rsa Username:docker}
	I0729 16:37:16.921056    5115 sshutil.go:53] new ssh client: &{IP:localhost Port:50469 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/stopped-upgrade-170000/id_rsa Username:docker}
	W0729 16:37:16.921559    5115 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50469: connect: connection refused
	I0729 16:37:16.921581    5115 retry.go:31] will retry after 281.037628ms: dial tcp [::1]:50469: connect: connection refused
	W0729 16:37:17.249802    5115 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0729 16:37:17.249941    5115 ssh_runner.go:195] Run: systemctl --version
	I0729 16:37:17.254173    5115 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 16:37:17.257089    5115 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 16:37:17.257141    5115 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0729 16:37:17.261610    5115 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0729 16:37:17.268574    5115 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
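The two find/sed passes above normalize any pre-existing CNI configs: IPv6 dst/subnet entries are dropped and the pod network is pinned. On the matched 87-podman-bridge.conflist the net effect is roughly this (the "before" values are illustrative; they assume podman's defaults):

    # before: "subnet": "10.88.0.0/16",   "gateway": "10.88.0.1"
    # after:  "subnet": "10.244.0.0/16",  "gateway": "10.244.0.1"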
	I0729 16:37:17.268585    5115 start.go:495] detecting cgroup driver to use...
	I0729 16:37:17.268677    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 16:37:17.281971    5115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0729 16:37:17.285517    5115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 16:37:17.288871    5115 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 16:37:17.288907    5115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 16:37:17.292321    5115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 16:37:17.295950    5115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 16:37:17.298825    5115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 16:37:17.301844    5115 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 16:37:17.305292    5115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 16:37:17.310548    5115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 16:37:17.314775    5115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
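
The run of sed edits above is how minikube flips containerd onto the "cgroupfs" driver: each relevant line of /etc/containerd/config.toml is rewritten in place before the daemon-reload/restart that follows. Below is a minimal Go sketch of the central substitution, the SystemdCgroup toggle. It is illustrative only (not minikube's actual code), uses the path from the log, and would need root plus a containerd restart to take effect.

// Illustrative sketch: rewrite any "SystemdCgroup = ..." line in
// containerd's config.toml to "SystemdCgroup = false" (cgroupfs),
// mirroring the sed expression in the log above.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Same substitution as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}
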
	I0729 16:37:17.317914    5115 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 16:37:17.320723    5115 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 16:37:17.323687    5115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:37:17.386693    5115 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0729 16:37:17.393173    5115 start.go:495] detecting cgroup driver to use...
	I0729 16:37:17.393260    5115 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 16:37:17.398798    5115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 16:37:17.403912    5115 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 16:37:17.409845    5115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 16:37:17.414006    5115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 16:37:17.418033    5115 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0729 16:37:17.479150    5115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 16:37:17.484179    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 16:37:17.489479    5115 ssh_runner.go:195] Run: which cri-dockerd
	I0729 16:37:17.490502    5115 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 16:37:17.493202    5115 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 16:37:17.498096    5115 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 16:37:17.559553    5115 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 16:37:17.624261    5115 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 16:37:17.624328    5115 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 16:37:17.629662    5115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:37:17.690610    5115 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 16:37:18.858812    5115 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.168219792s)
	I0729 16:37:18.858874    5115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 16:37:18.864298    5115 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0729 16:37:18.870394    5115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 16:37:18.874670    5115 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 16:37:18.946682    5115 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 16:37:19.006235    5115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:37:19.067537    5115 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 16:37:19.073738    5115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 16:37:19.077986    5115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:37:19.143614    5115 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 16:37:19.184880    5115 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 16:37:19.184963    5115 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 16:37:19.187473    5115 start.go:563] Will wait 60s for crictl version
	I0729 16:37:19.187529    5115 ssh_runner.go:195] Run: which crictl
	I0729 16:37:19.188753    5115 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 16:37:19.203480    5115 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0729 16:37:19.203550    5115 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 16:37:19.218799    5115 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 16:37:14.335916    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:19.241506    5115 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0729 16:37:19.241568    5115 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0729 16:37:19.242916    5115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 16:37:19.246750    5115 kubeadm.go:883] updating cluster {Name:stopped-upgrade-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50503 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0729 16:37:19.246795    5115 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 16:37:19.246835    5115 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 16:37:19.257427    5115 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 16:37:19.257435    5115 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 16:37:19.257482    5115 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 16:37:19.260586    5115 ssh_runner.go:195] Run: which lz4
	I0729 16:37:19.261846    5115 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 16:37:19.263004    5115 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 16:37:19.263013    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0729 16:37:20.213454    5115 docker.go:649] duration metric: took 951.670292ms to copy over tarball
	I0729 16:37:20.213513    5115 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
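
The stat failure just above is the expected path: minikube probes the guest for /preloaded.tar.lz4 with `stat -c "%s %y"`, treats a non-zero exit as "tarball absent", and only then scps the ~360 MB cached preload across and unpacks it with lz4. A hedged Go sketch of that probe-then-copy decision follows (not minikube's actual code; the copy and extraction are stubbed out as comments):

// Sketch of the existence probe: stat exits non-zero when the file is
// missing (the "No such file or directory" seen in the log).
package main

import (
	"fmt"
	"os/exec"
)

// preloadExists mirrors: stat -c "%s %y" /preloaded.tar.lz4
func preloadExists(path string) bool {
	return exec.Command("stat", "-c", "%s %y", path).Run() == nil
}

func main() {
	if !preloadExists("/preloaded.tar.lz4") {
		// In the log this is where the cached tarball is scp'd over and
		// unpacked with: tar --xattrs ... -I lz4 -C /var -xf /preloaded.tar.lz4
		fmt.Println("preload missing: copy cached tarball, then extract")
	}
}
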
	I0729 16:37:19.337267    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:19.337367    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:37:19.349769    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:37:19.349841    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:37:19.363307    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:37:19.363383    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:37:19.375312    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:37:19.375412    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:37:19.387070    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:37:19.387145    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:37:19.399202    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:37:19.399276    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:37:19.411738    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:37:19.411810    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:37:19.422655    4979 logs.go:276] 0 containers: []
	W0729 16:37:19.422667    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:37:19.422734    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:37:19.433954    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:37:19.433973    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:37:19.433978    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:37:19.472405    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:37:19.472420    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:37:19.489620    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:37:19.489632    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:37:19.502655    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:37:19.502666    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:37:19.516125    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:37:19.516137    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:37:19.529267    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:37:19.529279    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:37:19.544773    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:37:19.544785    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:37:19.561169    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:37:19.561183    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:37:19.592594    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:37:19.592612    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:37:19.616088    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:37:19.616103    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:37:19.623357    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:37:19.623370    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:37:19.661819    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:37:19.661836    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:37:19.683536    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:37:19.683549    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:37:19.697794    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:37:19.697810    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:37:19.723690    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:37:19.723720    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:37:19.747500    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:37:19.747512    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:37:19.763117    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:37:19.763132    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:37:22.289342    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:21.379435    5115 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.165943416s)
	I0729 16:37:21.379448    5115 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 16:37:21.395189    5115 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 16:37:21.398565    5115 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0729 16:37:21.403613    5115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:37:21.467249    5115 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 16:37:23.146588    5115 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.679369834s)
	I0729 16:37:23.146679    5115 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 16:37:23.158595    5115 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 16:37:23.158603    5115 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 16:37:23.158609    5115 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 16:37:23.162726    5115 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:37:23.164297    5115 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:37:23.166107    5115 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:37:23.166182    5115 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:37:23.168012    5115 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:37:23.168020    5115 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:37:23.169321    5115 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:37:23.169470    5115 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:37:23.170976    5115 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:37:23.171003    5115 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:37:23.172078    5115 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 16:37:23.172151    5115 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:37:23.172980    5115 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 16:37:23.173041    5115 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:37:23.174653    5115 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 16:37:23.174653    5115 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 16:37:23.582186    5115 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:37:23.594042    5115 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0729 16:37:23.594066    5115 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:37:23.594110    5115 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:37:23.599678    5115 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:37:23.601579    5115 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:37:23.604219    5115 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0729 16:37:23.611940    5115 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0729 16:37:23.611961    5115 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:37:23.612011    5115 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:37:23.615224    5115 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:37:23.617608    5115 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0729 16:37:23.617631    5115 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:37:23.617665    5115 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:37:23.623544    5115 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0729 16:37:23.624300    5115 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 16:37:23.624436    5115 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:37:23.633117    5115 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0729 16:37:23.633140    5115 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:37:23.633202    5115 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:37:23.633239    5115 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0729 16:37:23.642478    5115 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0729 16:37:23.642498    5115 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:37:23.642551    5115 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:37:23.643512    5115 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0729 16:37:23.652924    5115 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 16:37:23.653033    5115 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 16:37:23.654759    5115 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 16:37:23.655042    5115 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 16:37:23.655464    5115 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0729 16:37:23.655474    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0729 16:37:23.677729    5115 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0729 16:37:23.677737    5115 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0729 16:37:23.677749    5115 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 16:37:23.677756    5115 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0729 16:37:23.677802    5115 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0729 16:37:23.677802    5115 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0729 16:37:23.715483    5115 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 16:37:23.715494    5115 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 16:37:23.715510    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0729 16:37:23.715489    5115 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 16:37:23.715604    5115 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0729 16:37:23.753264    5115 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 16:37:23.753299    5115 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0729 16:37:23.753324    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0729 16:37:23.761201    5115 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 16:37:23.761210    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0729 16:37:23.791931    5115 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
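
Each cached image is streamed into the runtime with the `sudo cat <tarball> | docker load` pipeline seen above. An illustrative Go equivalent that wires the tarball to docker load's stdin (path taken from the log; error handling kept minimal):

// Stream a saved image tarball into `docker load` via stdin, the Go
// analogue of: sudo cat /var/lib/minikube/images/pause_3.7 | docker load
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	f, err := os.Open("/var/lib/minikube/images/pause_3.7")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	cmd := exec.Command("docker", "load")
	cmd.Stdin = f // equivalent of piping the file into docker load
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("docker load failed: %v\n%s", err, out)
	}
	log.Printf("loaded: %s", out)
}
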
	W0729 16:37:23.826628    5115 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 16:37:23.826749    5115 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:37:23.837592    5115 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0729 16:37:23.837612    5115 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:37:23.837664    5115 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:37:23.851710    5115 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 16:37:23.851826    5115 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 16:37:23.853401    5115 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0729 16:37:23.853418    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0729 16:37:23.879821    5115 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 16:37:23.879834    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0729 16:37:24.111545    5115 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 16:37:24.111583    5115 cache_images.go:92] duration metric: took 952.995833ms to LoadCachedImages
	W0729 16:37:24.111626    5115 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0729 16:37:24.111630    5115 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0729 16:37:24.111684    5115 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-170000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 16:37:24.111742    5115 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 16:37:24.125044    5115 cni.go:84] Creating CNI manager for ""
	I0729 16:37:24.125055    5115 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:37:24.125059    5115 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 16:37:24.125067    5115 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-170000 NodeName:stopped-upgrade-170000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 16:37:24.125126    5115 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-170000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 16:37:24.125729    5115 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0729 16:37:24.128505    5115 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 16:37:24.128530    5115 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 16:37:24.131188    5115 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0729 16:37:24.135679    5115 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 16:37:24.140342    5115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0729 16:37:24.145876    5115 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0729 16:37:24.147414    5115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
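
The one-liner above is minikube's usual /etc/hosts idiom: filter out any stale line ending in the control-plane name, append the fresh mapping, and copy the temp file back over /etc/hosts. A rough Go rendering of the same filter-and-append (illustrative only; writing /etc/hosts requires root):

// Drop any old "control-plane.minikube.internal" mapping and append the
// current one, mirroring the shell pipeline in the log above.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const name = "control-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	kept := lines[:0]
	for _, l := range lines {
		// same effect as: grep -v $'\tcontrol-plane.minikube.internal$'
		if !strings.HasSuffix(l, "\t"+name) {
			kept = append(kept, l)
		}
	}
	kept = append(kept, "10.0.2.15\t"+name)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
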
	I0729 16:37:24.152059    5115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:37:24.213385    5115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 16:37:24.219426    5115 certs.go:68] Setting up /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000 for IP: 10.0.2.15
	I0729 16:37:24.219434    5115 certs.go:194] generating shared ca certs ...
	I0729 16:37:24.219442    5115 certs.go:226] acquiring lock for ca certs: {Name:mk96bd81121b57115fda9376f192a645eb60e2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:37:24.219613    5115 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/ca.key
	I0729 16:37:24.219678    5115 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/proxy-client-ca.key
	I0729 16:37:24.219686    5115 certs.go:256] generating profile certs ...
	I0729 16:37:24.219760    5115 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/client.key
	I0729 16:37:24.219786    5115 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/apiserver.key.c425be07
	I0729 16:37:24.219799    5115 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/apiserver.crt.c425be07 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0729 16:37:24.362374    5115 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/apiserver.crt.c425be07 ...
	I0729 16:37:24.362389    5115 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/apiserver.crt.c425be07: {Name:mk819e52ffaeecb246d86958415d95ac02b9c779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:37:24.362780    5115 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/apiserver.key.c425be07 ...
	I0729 16:37:24.362789    5115 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/apiserver.key.c425be07: {Name:mk821641a7dc4277bb039a7049a4ea3656f9a023 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:37:24.362939    5115 certs.go:381] copying /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/apiserver.crt.c425be07 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/apiserver.crt
	I0729 16:37:24.363103    5115 certs.go:385] copying /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/apiserver.key.c425be07 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/apiserver.key
	I0729 16:37:24.363263    5115 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/proxy-client.key
	I0729 16:37:24.363406    5115 certs.go:484] found cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/1714.pem (1338 bytes)
	W0729 16:37:24.363438    5115 certs.go:480] ignoring /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/1714_empty.pem, impossibly tiny 0 bytes
	I0729 16:37:24.363444    5115 certs.go:484] found cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 16:37:24.363464    5115 certs.go:484] found cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem (1082 bytes)
	I0729 16:37:24.363488    5115 certs.go:484] found cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem (1123 bytes)
	I0729 16:37:24.363532    5115 certs.go:484] found cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/key.pem (1675 bytes)
	I0729 16:37:24.363592    5115 certs.go:484] found cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/files/etc/ssl/certs/17142.pem (1708 bytes)
	I0729 16:37:24.363915    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 16:37:24.371093    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 16:37:24.378463    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 16:37:24.385152    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 16:37:24.392032    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 16:37:24.399566    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 16:37:24.406702    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 16:37:24.413451    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 16:37:24.420003    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/files/etc/ssl/certs/17142.pem --> /usr/share/ca-certificates/17142.pem (1708 bytes)
	I0729 16:37:24.427216    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 16:37:24.434179    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/1714.pem --> /usr/share/ca-certificates/1714.pem (1338 bytes)
	I0729 16:37:24.440963    5115 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 16:37:24.445855    5115 ssh_runner.go:195] Run: openssl version
	I0729 16:37:24.447668    5115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17142.pem && ln -fs /usr/share/ca-certificates/17142.pem /etc/ssl/certs/17142.pem"
	I0729 16:37:24.450824    5115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17142.pem
	I0729 16:37:24.452194    5115 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 22:54 /usr/share/ca-certificates/17142.pem
	I0729 16:37:24.452211    5115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17142.pem
	I0729 16:37:24.453934    5115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17142.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 16:37:24.456610    5115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 16:37:24.460126    5115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:37:24.461559    5115 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:37:24.461578    5115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:37:24.463237    5115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 16:37:24.466067    5115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1714.pem && ln -fs /usr/share/ca-certificates/1714.pem /etc/ssl/certs/1714.pem"
	I0729 16:37:24.468760    5115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1714.pem
	I0729 16:37:24.470116    5115 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 22:54 /usr/share/ca-certificates/1714.pem
	I0729 16:37:24.470137    5115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1714.pem
	I0729 16:37:24.471789    5115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1714.pem /etc/ssl/certs/51391683.0"
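
The `openssl x509 -hash` / `ln -fs` pairs above install each CA under OpenSSL's c_rehash convention: the PEM gets a symlink at /etc/ssl/certs/<subject-hash>.0 so the TLS stack can look it up by hash. A small sketch of one such link, shelling out to openssl exactly as the log does (paths illustrative; the b5213941 hash for minikubeCA.pem appears in the log):

// Compute the OpenSSL subject hash for a PEM and symlink it into
// /etc/ssl/certs/<hash>.0 if no link exists yet.
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := "/etc/ssl/certs/" + hash + ".0"
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pem, link); err != nil {
			log.Fatal(err)
		}
	}
}
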
	I0729 16:37:24.474923    5115 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 16:37:24.476336    5115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 16:37:24.478084    5115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 16:37:24.479869    5115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 16:37:24.481932    5115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 16:37:24.483739    5115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 16:37:24.485411    5115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
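
The `-checkend 86400` probes above make openssl exit non-zero when a certificate expires within the next 24 hours, which is what would trigger regeneration. The same check written in plain Go, parsing the PEM and comparing NotAfter (a stand-in for the openssl invocation, not minikube's code; file path from the log):

// Report whether a certificate expires within 86400 seconds, the Go
// analogue of: openssl x509 -noout -in <crt> -checkend 86400
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 86400s")
	}
}
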
	I0729 16:37:24.487111    5115 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50503 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 16:37:24.487178    5115 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 16:37:24.497102    5115 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 16:37:24.500324    5115 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 16:37:24.500330    5115 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 16:37:24.500353    5115 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 16:37:24.504243    5115 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 16:37:24.504556    5115 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-170000" does not appear in /Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:37:24.504651    5115 kubeconfig.go:62] /Users/jenkins/minikube-integration/19348-1218/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-170000" cluster setting kubeconfig missing "stopped-upgrade-170000" context setting]
	I0729 16:37:24.504858    5115 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/kubeconfig: {Name:mkadb977bd50641dea3f6c522a66ad62f461af12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:37:24.505296    5115 kapi.go:59] client config for stopped-upgrade-170000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/client.key", CAFile:"/Users/jenkins/minikube-integration/19348-1218/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102460080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 16:37:24.505617    5115 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 16:37:24.508424    5115 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-170000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0729 16:37:24.508429    5115 kubeadm.go:1160] stopping kube-system containers ...
	I0729 16:37:24.508467    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 16:37:24.519202    5115 docker.go:483] Stopping containers: [992689aa0398 f5562f98bfc0 ae839b7e08bd a6704f01ea0d 713ebdc98434 bc11e1c032a5 8b58cefd71ff 0de0e91e43bd 23ae4cb25902]
	I0729 16:37:24.519268    5115 ssh_runner.go:195] Run: docker stop 992689aa0398 f5562f98bfc0 ae839b7e08bd a6704f01ea0d 713ebdc98434 bc11e1c032a5 8b58cefd71ff 0de0e91e43bd 23ae4cb25902
	I0729 16:37:24.529760    5115 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 16:37:24.535526    5115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 16:37:24.538685    5115 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 16:37:24.538694    5115 kubeadm.go:157] found existing configuration files:
	
	I0729 16:37:24.538714    5115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/admin.conf
	I0729 16:37:24.541445    5115 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 16:37:24.541469    5115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 16:37:24.544143    5115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/kubelet.conf
	I0729 16:37:24.547128    5115 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 16:37:24.547149    5115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 16:37:24.550015    5115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/controller-manager.conf
	I0729 16:37:24.552531    5115 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 16:37:24.552554    5115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 16:37:24.555694    5115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/scheduler.conf
	I0729 16:37:24.558664    5115 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 16:37:24.558683    5115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 16:37:24.561308    5115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 16:37:24.564193    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:37:24.586449    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:37:24.960404    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:37:25.069989    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:37:25.100808    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:37:25.127040    5115 api_server.go:52] waiting for apiserver process to appear ...
	I0729 16:37:25.127117    5115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:37:25.629300    5115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
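
After the kubeadm init phases, the run settles into polling `pgrep -xnf kube-apiserver.*minikube.*` until the apiserver process appears (the repeated Run lines above). A hedged sketch of such a poll loop with a fixed backoff; the real wait logic, timeouts, and healthz checks live in minikube's api_server.go and are not reproduced here:

// Poll pgrep until the kube-apiserver process exists or we give up.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 10; i++ {
		// pgrep exits 0 once a process matching the full command line exists
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver process")
}
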
	I0729 16:37:27.290624    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:27.290905    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:37:27.316872    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:37:27.317013    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:37:27.334922    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:37:27.335011    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:37:27.348440    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:37:27.348515    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:37:27.359634    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:37:27.359705    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:37:27.370591    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:37:27.370657    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:37:27.381463    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:37:27.381530    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:37:27.391785    4979 logs.go:276] 0 containers: []
	W0729 16:37:27.391801    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:37:27.391861    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:37:27.401626    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:37:27.401656    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:37:27.401662    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:37:27.415918    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:37:27.415929    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:37:27.433946    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:37:27.433957    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:37:27.445506    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:37:27.445520    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:37:27.457129    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:37:27.457139    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:37:27.475066    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:37:27.475075    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:37:27.489409    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:37:27.489419    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:37:27.504212    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:37:27.504222    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:37:27.521282    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:37:27.521294    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:37:27.532598    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:37:27.532612    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:37:27.570055    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:37:27.570066    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:37:27.606580    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:37:27.606593    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:37:27.625278    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:37:27.625294    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:37:27.648141    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:37:27.648147    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:37:27.659480    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:37:27.659495    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:37:27.663815    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:37:27.663823    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:37:27.682882    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:37:27.682892    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:37:26.128983    5115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:37:26.133081    5115 api_server.go:72] duration metric: took 1.006073667s to wait for apiserver process to appear ...
	I0729 16:37:26.133090    5115 api_server.go:88] waiting for apiserver healthz status ...
	I0729 16:37:26.133099    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:30.206891    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:31.134385    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:31.134455    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:35.209039    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:35.209177    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:37:35.222921    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:37:35.223011    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:37:35.235144    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:37:35.235220    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:37:35.246366    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:37:35.246437    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:37:35.257334    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:37:35.257406    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:37:35.267993    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:37:35.268061    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:37:35.278514    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:37:35.278581    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:37:35.288813    4979 logs.go:276] 0 containers: []
	W0729 16:37:35.288829    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:37:35.288888    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:37:35.300047    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:37:35.300065    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:37:35.300070    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:37:35.311432    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:37:35.311444    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:37:35.328536    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:37:35.328547    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:37:35.340012    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:37:35.340022    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:37:35.351888    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:37:35.351902    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:37:35.356025    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:37:35.356033    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:37:35.391163    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:37:35.391174    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:37:35.411630    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:37:35.411640    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:37:35.426834    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:37:35.426845    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:37:35.438464    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:37:35.438473    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:37:35.476783    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:37:35.476792    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:37:35.490395    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:37:35.490405    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:37:35.511060    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:37:35.511070    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:37:35.533263    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:37:35.533271    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:37:35.547147    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:37:35.547163    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:37:35.558486    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:37:35.558498    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:37:35.571356    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:37:35.571370    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:37:38.090078    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:36.134892    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:36.134943    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:43.092636    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:43.092811    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:37:43.107530    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:37:43.107607    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:37:43.119293    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:37:43.119365    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:37:43.129484    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:37:43.129544    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:37:43.140198    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:37:43.140266    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:37:43.151328    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:37:43.151399    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:37:43.161872    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:37:43.161939    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:37:43.172591    4979 logs.go:276] 0 containers: []
	W0729 16:37:43.172604    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:37:43.172662    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:37:43.182730    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:37:43.182748    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:37:43.182754    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:37:43.194332    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:37:43.194342    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:37:43.212394    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:37:43.212403    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:37:43.223337    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:37:43.223349    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:37:43.241096    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:37:43.241107    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:37:43.257460    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:37:43.257471    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:37:43.268938    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:37:43.268949    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:37:43.305280    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:37:43.305292    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:37:43.345071    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:37:43.345085    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:37:43.359097    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:37:43.359110    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:37:43.371493    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:37:43.371510    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:37:43.393848    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:37:43.393855    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:37:43.398136    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:37:43.398145    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:37:43.413324    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:37:43.413333    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:37:43.433233    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:37:43.433251    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:37:43.448877    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:37:43.448890    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:37:43.461202    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:37:43.461214    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:37:41.135208    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:41.135256    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:45.975500    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:46.135553    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:46.135577    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:50.977619    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:50.977835    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:37:50.991504    4979 logs.go:276] 2 containers: [cba04b708df7 2fc9c34a112c]
	I0729 16:37:50.991587    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:37:51.003201    4979 logs.go:276] 2 containers: [f70d307de2d8 16be0b768ead]
	I0729 16:37:51.003266    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:37:51.013507    4979 logs.go:276] 1 containers: [d1dad7bcead6]
	I0729 16:37:51.013577    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:37:51.024489    4979 logs.go:276] 2 containers: [08c99aeedb5e a4c6fb95565a]
	I0729 16:37:51.024565    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:37:51.034962    4979 logs.go:276] 1 containers: [00c7df57971c]
	I0729 16:37:51.035037    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:37:51.045077    4979 logs.go:276] 2 containers: [f87d4ac64fea 7cc104746214]
	I0729 16:37:51.045140    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:37:51.055265    4979 logs.go:276] 0 containers: []
	W0729 16:37:51.055277    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:37:51.055335    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:37:51.067016    4979 logs.go:276] 2 containers: [ae74b870c9f1 66ff49475b7f]
	I0729 16:37:51.067033    4979 logs.go:123] Gathering logs for storage-provisioner [66ff49475b7f] ...
	I0729 16:37:51.067038    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ff49475b7f"
	I0729 16:37:51.079318    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:37:51.079331    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:37:51.083685    4979 logs.go:123] Gathering logs for etcd [16be0b768ead] ...
	I0729 16:37:51.083693    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16be0b768ead"
	I0729 16:37:51.100867    4979 logs.go:123] Gathering logs for kube-controller-manager [f87d4ac64fea] ...
	I0729 16:37:51.100879    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87d4ac64fea"
	I0729 16:37:51.119100    4979 logs.go:123] Gathering logs for kube-controller-manager [7cc104746214] ...
	I0729 16:37:51.119112    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc104746214"
	I0729 16:37:51.132973    4979 logs.go:123] Gathering logs for kube-scheduler [a4c6fb95565a] ...
	I0729 16:37:51.132984    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4c6fb95565a"
	I0729 16:37:51.147975    4979 logs.go:123] Gathering logs for kube-proxy [00c7df57971c] ...
	I0729 16:37:51.147986    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00c7df57971c"
	I0729 16:37:51.159320    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:37:51.159331    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:37:51.182273    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:37:51.182279    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:37:51.216264    4979 logs.go:123] Gathering logs for kube-apiserver [cba04b708df7] ...
	I0729 16:37:51.216275    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cba04b708df7"
	I0729 16:37:51.230374    4979 logs.go:123] Gathering logs for etcd [f70d307de2d8] ...
	I0729 16:37:51.230386    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f70d307de2d8"
	I0729 16:37:51.245324    4979 logs.go:123] Gathering logs for coredns [d1dad7bcead6] ...
	I0729 16:37:51.245337    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dad7bcead6"
	I0729 16:37:51.256610    4979 logs.go:123] Gathering logs for kube-scheduler [08c99aeedb5e] ...
	I0729 16:37:51.256624    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08c99aeedb5e"
	I0729 16:37:51.268136    4979 logs.go:123] Gathering logs for storage-provisioner [ae74b870c9f1] ...
	I0729 16:37:51.268147    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae74b870c9f1"
	I0729 16:37:51.279918    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:37:51.279929    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:37:51.292380    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:37:51.292391    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:37:51.328828    4979 logs.go:123] Gathering logs for kube-apiserver [2fc9c34a112c] ...
	I0729 16:37:51.328841    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fc9c34a112c"
	I0729 16:37:53.854924    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:51.135862    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:51.135875    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:58.857177    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:58.857265    4979 kubeadm.go:597] duration metric: took 4m4.470128125s to restartPrimaryControlPlane
	W0729 16:37:58.857346    4979 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 16:37:58.857384    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 16:37:59.854257    4979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 16:37:59.859202    4979 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 16:37:59.861972    4979 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 16:37:59.864594    4979 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 16:37:59.864600    4979 kubeadm.go:157] found existing configuration files:
	
	I0729 16:37:59.864625    4979 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50279 /etc/kubernetes/admin.conf
	I0729 16:37:59.867852    4979 kubeadm.go:163] "https://control-plane.minikube.internal:50279" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50279 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 16:37:59.867875    4979 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 16:37:59.870928    4979 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50279 /etc/kubernetes/kubelet.conf
	I0729 16:37:59.873317    4979 kubeadm.go:163] "https://control-plane.minikube.internal:50279" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50279 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 16:37:59.873341    4979 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 16:37:59.876373    4979 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50279 /etc/kubernetes/controller-manager.conf
	I0729 16:37:59.879478    4979 kubeadm.go:163] "https://control-plane.minikube.internal:50279" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50279 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 16:37:59.879504    4979 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 16:37:59.882451    4979 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50279 /etc/kubernetes/scheduler.conf
	I0729 16:37:59.884973    4979 kubeadm.go:163] "https://control-plane.minikube.internal:50279" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50279 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 16:37:59.884996    4979 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
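The grep/rm sequence above is the stale-kubeconfig cleanup: each of the four config files is kept only if it already references the expected control-plane endpoint, and removed otherwise so that kubeadm init regenerates it (here grep exits with status 2 because the files do not exist at all). A local-filesystem sketch of the same decision, under assumed names; minikube actually runs these as grep/rm over SSH:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfig removes any config file that does not mention the expected
// endpoint, mirroring: sudo grep <endpoint> <file> || sudo rm -f <file>
func cleanStaleConfig(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			os.Remove(f)
		}
	}
}

func main() {
	cleanStaleConfig("https://control-plane.minikube.internal:50279", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}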
	I0729 16:37:59.888105    4979 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 16:37:59.904811    4979 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 16:37:59.904841    4979 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 16:37:59.955984    4979 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 16:37:59.956154    4979 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 16:37:59.956213    4979 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 16:38:00.005951    4979 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 16:38:00.009227    4979 out.go:204]   - Generating certificates and keys ...
	I0729 16:38:00.009260    4979 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 16:38:00.009293    4979 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 16:38:00.009342    4979 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 16:38:00.009375    4979 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 16:38:00.009417    4979 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 16:38:00.009448    4979 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 16:38:00.009484    4979 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 16:38:00.009524    4979 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 16:38:00.009567    4979 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 16:38:00.009605    4979 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 16:38:00.009626    4979 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 16:38:00.009652    4979 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 16:38:00.062254    4979 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 16:38:00.169428    4979 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 16:38:00.227460    4979 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 16:38:00.297978    4979 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 16:38:00.327345    4979 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 16:38:00.327650    4979 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 16:38:00.327677    4979 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 16:38:00.400552    4979 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 16:37:56.136331    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:56.136381    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:00.404621    4979 out.go:204]   - Booting up control plane ...
	I0729 16:38:00.404669    4979 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 16:38:00.404711    4979 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 16:38:00.404745    4979 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 16:38:00.404794    4979 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 16:38:00.404884    4979 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 16:38:04.905563    4979 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502184 seconds
	I0729 16:38:04.905637    4979 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 16:38:04.909556    4979 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 16:38:05.428594    4979 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 16:38:05.428950    4979 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-896000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 16:38:05.931862    4979 kubeadm.go:310] [bootstrap-token] Using token: tpzil2.duhzl0ieoas6xbte
	I0729 16:38:01.137159    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:01.137181    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:05.938159    4979 out.go:204]   - Configuring RBAC rules ...
	I0729 16:38:05.938236    4979 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 16:38:05.938302    4979 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 16:38:05.944662    4979 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 16:38:05.945574    4979 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 16:38:05.946406    4979 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 16:38:05.947310    4979 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 16:38:05.951017    4979 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 16:38:06.111149    4979 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 16:38:06.335766    4979 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 16:38:06.336245    4979 kubeadm.go:310] 
	I0729 16:38:06.336349    4979 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 16:38:06.336388    4979 kubeadm.go:310] 
	I0729 16:38:06.336432    4979 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 16:38:06.336437    4979 kubeadm.go:310] 
	I0729 16:38:06.336449    4979 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 16:38:06.336477    4979 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 16:38:06.336504    4979 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 16:38:06.336524    4979 kubeadm.go:310] 
	I0729 16:38:06.336659    4979 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 16:38:06.336673    4979 kubeadm.go:310] 
	I0729 16:38:06.336721    4979 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 16:38:06.336738    4979 kubeadm.go:310] 
	I0729 16:38:06.336805    4979 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 16:38:06.336850    4979 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 16:38:06.336890    4979 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 16:38:06.336893    4979 kubeadm.go:310] 
	I0729 16:38:06.336933    4979 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 16:38:06.336970    4979 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 16:38:06.336973    4979 kubeadm.go:310] 
	I0729 16:38:06.337021    4979 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tpzil2.duhzl0ieoas6xbte \
	I0729 16:38:06.337075    4979 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b9cecc1c3dd985258772234c33c785f9bcad6eff884cc7ff19b79a518c1cf4e1 \
	I0729 16:38:06.337092    4979 kubeadm.go:310] 	--control-plane 
	I0729 16:38:06.337097    4979 kubeadm.go:310] 
	I0729 16:38:06.337149    4979 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 16:38:06.337158    4979 kubeadm.go:310] 
	I0729 16:38:06.337197    4979 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tpzil2.duhzl0ieoas6xbte \
	I0729 16:38:06.337255    4979 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b9cecc1c3dd985258772234c33c785f9bcad6eff884cc7ff19b79a518c1cf4e1 
	I0729 16:38:06.337338    4979 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 16:38:06.337345    4979 cni.go:84] Creating CNI manager for ""
	I0729 16:38:06.337353    4979 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:38:06.343715    4979 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 16:38:06.351683    4979 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 16:38:06.354743    4979 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 16:38:06.359550    4979 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 16:38:06.359617    4979 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:38:06.359627    4979 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-896000 minikube.k8s.io/updated_at=2024_07_29T16_38_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a9ecc7e4bd8b0211d6b42552bd8a0113828840b9 minikube.k8s.io/name=running-upgrade-896000 minikube.k8s.io/primary=true
	I0729 16:38:06.402068    4979 ops.go:34] apiserver oom_adj: -16
	I0729 16:38:06.402083    4979 kubeadm.go:1113] duration metric: took 42.501917ms to wait for elevateKubeSystemPrivileges
	I0729 16:38:06.402123    4979 kubeadm.go:394] duration metric: took 4m12.0296915s to StartCluster
	I0729 16:38:06.402137    4979 settings.go:142] acquiring lock: {Name:mk1df9c174f764d47de5a2c25ea0f0fc28c1d98c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:38:06.402229    4979 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:38:06.402598    4979 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/kubeconfig: {Name:mkadb977bd50641dea3f6c522a66ad62f461af12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:38:06.402820    4979 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:38:06.402830    4979 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 16:38:06.402868    4979 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-896000"
	I0729 16:38:06.402881    4979 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-896000"
	I0729 16:38:06.402880    4979 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-896000"
	W0729 16:38:06.402886    4979 addons.go:243] addon storage-provisioner should already be in state true
	I0729 16:38:06.402893    4979 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-896000"
	I0729 16:38:06.402898    4979 host.go:66] Checking if "running-upgrade-896000" exists ...
	I0729 16:38:06.402940    4979 config.go:182] Loaded profile config "running-upgrade-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:38:06.403771    4979 kapi.go:59] client config for running-upgrade-896000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/running-upgrade-896000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/running-upgrade-896000/client.key", CAFile:"/Users/jenkins/minikube-integration/19348-1218/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101bc8080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
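The rest.Config dump above is the client-go configuration minikube builds from the profile's client cert/key and the cluster CA. A minimal sketch of constructing the equivalent config directly with k8s.io/client-go, using the paths from the log; this is an illustration, not minikube's kapi helper:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Host and TLS material taken verbatim from the logged rest.Config.
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/running-upgrade-896000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/running-upgrade-896000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/19348-1218/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("client ready:", clientset != nil)
}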
	I0729 16:38:06.403891    4979 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-896000"
	W0729 16:38:06.403895    4979 addons.go:243] addon default-storageclass should already be in state true
	I0729 16:38:06.403902    4979 host.go:66] Checking if "running-upgrade-896000" exists ...
	I0729 16:38:06.406707    4979 out.go:177] * Verifying Kubernetes components...
	I0729 16:38:06.407007    4979 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 16:38:06.410909    4979 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 16:38:06.410915    4979 sshutil.go:53] new ssh client: &{IP:localhost Port:50247 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/running-upgrade-896000/id_rsa Username:docker}
	I0729 16:38:06.413629    4979 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:38:06.417651    4979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:38:06.421547    4979 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 16:38:06.421557    4979 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 16:38:06.421564    4979 sshutil.go:53] new ssh client: &{IP:localhost Port:50247 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/running-upgrade-896000/id_rsa Username:docker}
	I0729 16:38:06.492494    4979 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 16:38:06.497848    4979 api_server.go:52] waiting for apiserver process to appear ...
	I0729 16:38:06.497883    4979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:38:06.501744    4979 api_server.go:72] duration metric: took 98.915458ms to wait for apiserver process to appear ...
	I0729 16:38:06.501751    4979 api_server.go:88] waiting for apiserver healthz status ...
	I0729 16:38:06.501758    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:06.530223    4979 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 16:38:06.545836    4979 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 16:38:06.138005    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:06.138025    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:11.503765    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:11.503792    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:11.138538    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:11.138595    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:16.503940    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:16.503964    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:16.140123    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:16.140162    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:21.504274    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:21.504294    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:21.142119    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:21.142139    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:26.504575    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:26.504600    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:26.144271    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:26.144620    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:38:26.176685    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:38:26.176878    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:38:26.203741    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:38:26.203843    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:38:26.217664    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:38:26.217739    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:38:26.229631    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:38:26.229702    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:38:26.241155    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:38:26.241239    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:38:26.252215    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:38:26.252284    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:38:26.264451    5115 logs.go:276] 0 containers: []
	W0729 16:38:26.264466    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:38:26.264540    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:38:26.275781    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:38:26.275803    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:38:26.275808    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:38:26.293608    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:38:26.293619    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:38:26.307734    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:38:26.307748    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:38:26.319761    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:38:26.319777    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:38:26.331473    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:38:26.331484    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:38:26.369517    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:38:26.369527    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:38:26.381180    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:38:26.381201    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:38:26.398830    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:38:26.398841    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:38:26.412518    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:38:26.412529    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:38:26.423766    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:38:26.423778    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:38:26.448726    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:38:26.448734    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:38:26.524298    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:38:26.524310    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:38:26.536468    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:38:26.536482    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:38:26.558822    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:38:26.558839    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:38:26.570504    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:38:26.570516    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:38:26.574663    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:38:26.574673    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:38:26.615126    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:38:26.615137    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:38:29.131890    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:31.505071    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:31.505090    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:34.133165    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:34.133319    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:38:34.144557    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:38:34.144636    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:38:34.155666    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:38:34.155746    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:38:34.166233    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:38:34.166320    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:38:34.176744    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:38:34.176817    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:38:34.195388    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:38:34.195477    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:38:34.206195    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:38:34.206270    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:38:34.216466    5115 logs.go:276] 0 containers: []
	W0729 16:38:34.216477    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:38:34.216533    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:38:34.227712    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:38:34.227734    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:38:34.227739    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:38:34.232409    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:38:34.232417    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:38:34.269224    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:38:34.269236    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:38:34.283789    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:38:34.283800    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:38:34.301175    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:38:34.301186    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:38:34.337406    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:38:34.337415    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:38:34.375606    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:38:34.375621    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:38:34.400116    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:38:34.400127    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:38:34.417721    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:38:34.417731    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:38:34.430195    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:38:34.430206    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:38:34.442186    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:38:34.442202    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:38:34.456173    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:38:34.456184    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:38:34.468655    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:38:34.468667    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:38:34.481717    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:38:34.481732    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:38:34.493262    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:38:34.493286    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:38:34.504984    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:38:34.504994    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:38:34.530443    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:38:34.530452    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
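	[Editor's sketch] The eight docker ps -a --filter=name=k8s_<component> --format={{.ID}} lookups above repeat before every log-gathering pass. A minimal, hypothetical Go sketch of that enumeration step — run locally rather than over minikube's ssh_runner, with containerIDs as an illustrative helper, not minikube's API; the component names are the ones appearing in this log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors the lookups logged above:
	//   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Printf("%s: %v\n", c, err)
				continue
			}
			// Matches the "N containers: [...]" lines; kindnet yields 0 in this run.
			fmt.Printf("%d containers: %v\n", len(ids), ids)
		}
	}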
	I0729 16:38:36.505663    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:36.505685    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 16:38:36.873923    4979 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 16:38:36.878517    4979 out.go:177] * Enabled addons: storage-provisioner
	I0729 16:38:36.892447    4979 addons.go:510] duration metric: took 30.490543083s for enable addons: enabled=[storage-provisioner]
	I0729 16:38:37.043933    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:41.506428    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:41.506475    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:42.046075    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:42.046317    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:38:42.072499    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:38:42.072603    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:38:42.087670    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:38:42.087754    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:38:42.100092    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:38:42.100163    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:38:42.112639    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:38:42.112707    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:38:42.122782    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:38:42.122848    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:38:42.133370    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:38:42.133441    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:38:42.143430    5115 logs.go:276] 0 containers: []
	W0729 16:38:42.143441    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:38:42.143503    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:38:42.153324    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:38:42.153342    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:38:42.153347    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:38:42.197220    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:38:42.197236    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:38:42.212039    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:38:42.212050    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:38:42.223558    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:38:42.223571    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:38:42.238341    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:38:42.238352    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:38:42.256235    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:38:42.256246    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:38:42.281591    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:38:42.281599    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:38:42.295447    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:38:42.295459    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:38:42.330944    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:38:42.330957    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:38:42.346603    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:38:42.346619    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:38:42.369326    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:38:42.369340    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:38:42.374013    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:38:42.374021    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:38:42.386062    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:38:42.386078    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:38:42.397934    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:38:42.397945    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:38:42.409267    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:38:42.409278    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:38:42.446605    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:38:42.446620    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:38:42.460508    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:38:42.460524    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:38:44.974301    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:46.507594    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:46.507638    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:49.976485    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:49.976722    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:38:49.992957    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:38:49.993039    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:38:50.006217    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:38:50.006290    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:38:50.017128    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:38:50.017197    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:38:50.027712    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:38:50.027789    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:38:50.038278    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:38:50.038355    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:38:50.055775    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:38:50.055844    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:38:50.066064    5115 logs.go:276] 0 containers: []
	W0729 16:38:50.066074    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:38:50.066133    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:38:50.076859    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:38:50.076876    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:38:50.076881    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:38:50.088261    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:38:50.088272    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:38:50.112686    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:38:50.112698    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:38:50.151915    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:38:50.151927    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:38:50.167001    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:38:50.167014    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:38:50.186118    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:38:50.186128    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:38:50.198618    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:38:50.198630    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:38:50.215397    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:38:50.215408    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:38:50.254031    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:38:50.254045    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:38:50.291609    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:38:50.291622    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:38:50.312399    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:38:50.312410    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:38:50.330585    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:38:50.330595    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:38:50.341993    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:38:50.342008    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:38:50.353532    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:38:50.353543    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:38:50.358036    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:38:50.358044    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:38:50.372035    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:38:50.372046    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:38:50.383684    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:38:50.383695    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
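	[Editor's sketch] Throughout this log, api_server.go:253 probes https://10.0.2.15:8443/healthz and api_server.go:269 reports "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" roughly five seconds later — the error string Go's http.Client emits when its Timeout elapses before response headers arrive. A simplified, hypothetical sketch of such a probe loop, not minikube's actual implementation; the URL and ~5 s budget come from the log, the retry cadence is illustrative:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			// A ~5 s budget reproduces the gap between each "Checking" line
			// and its "stopped: ... Client.Timeout exceeded" counterpart.
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption: the test VM serves a self-signed certificate.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		url := "https://10.0.2.15:8443/healthz"
		for {
			fmt.Printf("Checking apiserver healthz at %s ...\n", url)
			resp, err := client.Get(url)
			if err != nil {
				fmt.Printf("stopped: %s: %v\n", url, err)
			} else {
				healthy := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if healthy {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(2 * time.Second) // illustrative back-off before the next probe
		}
	}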
	I0729 16:38:51.508991    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:51.509018    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:52.897492    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:56.510737    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:56.510758    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:57.899667    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:57.899898    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:38:57.918870    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:38:57.918971    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:38:57.934602    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:38:57.934682    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:38:57.946145    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:38:57.946216    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:38:57.956937    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:38:57.957007    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:38:57.967605    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:38:57.967669    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:38:57.978704    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:38:57.978775    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:38:57.988953    5115 logs.go:276] 0 containers: []
	W0729 16:38:57.988968    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:38:57.989030    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:38:57.999774    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:38:57.999795    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:38:57.999801    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:38:58.035184    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:38:58.035200    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:38:58.048422    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:38:58.048434    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:38:58.072742    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:38:58.072753    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:38:58.087652    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:38:58.087667    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:38:58.104936    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:38:58.104947    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:38:58.116807    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:38:58.116818    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:38:58.155110    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:38:58.155117    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:38:58.159218    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:38:58.159227    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:38:58.173179    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:38:58.173191    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:38:58.211026    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:38:58.211040    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:38:58.225288    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:38:58.225298    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:38:58.237179    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:38:58.237191    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:38:58.248984    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:38:58.248995    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:38:58.270266    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:38:58.270278    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:38:58.281580    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:38:58.281596    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:38:58.293004    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:38:58.293016    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
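	[Editor's sketch] Each "Gathering logs for X ..." step above shells out to a fixed command: docker logs --tail 400 <id> for containers, journalctl -u <unit> -n 400 for systemd units, plus dmesg, kubectl describe nodes, and the crictl-or-docker fallback for container status. A hypothetical local sketch of two of those steps — gather is an illustrative helper; minikube actually runs these over SSH inside the VM:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs one diagnostic command and prints its combined output,
	// mirroring the "Gathering logs for X ..." / ssh_runner.go pairs above.
	func gather(name string, args ...string) {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		if err != nil {
			fmt.Printf("error: %v\n", err)
		}
		fmt.Print(string(out))
	}

	func main() {
		// Last 400 lines of one apiserver container (ID taken from this log).
		gather("kube-apiserver [b53bd3d67821]", "docker", "logs", "--tail", "400", "b53bd3d67821")
		// Last 400 kubelet journal entries, as in `sudo journalctl -u kubelet -n 400`.
		gather("kubelet", "sudo", "journalctl", "-u", "kubelet", "-n", "400")
	}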
	I0729 16:39:00.806910    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:01.512798    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:01.512839    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:05.809052    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:05.809260    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:05.830099    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:39:05.830203    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:05.844675    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:39:05.844749    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:05.857216    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:39:05.857285    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:05.867596    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:39:05.867666    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:05.883078    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:39:05.883164    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:05.899077    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:39:05.899151    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:05.908942    5115 logs.go:276] 0 containers: []
	W0729 16:39:05.908953    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:05.909003    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:05.927787    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:39:05.927805    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:05.927810    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:05.962398    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:39:05.962413    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:39:05.973794    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:05.973805    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:05.998168    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:39:05.998178    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:39:06.012613    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:39:06.012625    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:06.024503    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:39:06.024514    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:39:06.044924    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:06.044936    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:06.080879    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:39:06.080887    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:39:06.514913    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:06.514996    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:06.525687    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:39:06.525757    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:06.537536    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:39:06.537606    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:06.548097    4979 logs.go:276] 2 containers: [148e5d9f19b4 1d8ed2209c51]
	I0729 16:39:06.548168    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:06.558459    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:39:06.558519    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:06.576234    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:39:06.576303    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:06.586496    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:39:06.586562    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:06.600267    4979 logs.go:276] 0 containers: []
	W0729 16:39:06.600282    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:06.600336    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:06.611069    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:39:06.611084    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:39:06.611089    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:39:06.622794    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:39:06.622805    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:39:06.634963    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:39:06.634974    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:06.646316    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:06.646328    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:06.681409    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:06.681417    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:06.686080    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:06.686090    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:06.720949    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:39:06.720961    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:39:06.735996    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:39:06.736007    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:39:06.753135    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:39:06.753145    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:39:06.764952    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:06.764963    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:06.790218    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:39:06.790229    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:39:06.804132    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:39:06.804146    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:39:06.817555    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:39:06.817568    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:39:06.119914    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:39:06.119926    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:39:06.141858    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:39:06.141870    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:39:06.154215    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:39:06.154227    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:39:06.176319    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:39:06.176331    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:39:06.190912    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:39:06.190923    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:39:06.206515    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:06.206527    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:06.211250    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:39:06.211257    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:39:06.225212    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:39:06.225223    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:39:06.238926    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:39:06.238937    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:39:08.752848    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:09.331077    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:13.755055    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:13.755175    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:13.768843    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:39:13.768921    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:13.780359    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:39:13.780439    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:13.791259    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:39:13.791327    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:13.801980    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:39:13.802044    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:13.812435    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:39:13.812496    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:13.823135    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:39:13.823203    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:13.833874    5115 logs.go:276] 0 containers: []
	W0729 16:39:13.833889    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:13.833946    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:13.845544    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:39:13.845562    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:13.845567    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:13.886101    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:39:13.886114    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:39:13.901991    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:39:13.902002    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:39:13.914086    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:39:13.914097    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:39:13.926319    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:39:13.926331    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:39:13.945137    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:39:13.945154    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:13.957522    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:13.957532    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:13.994052    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:13.994063    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:13.998691    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:39:13.998698    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:39:14.041211    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:39:14.041221    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:39:14.055046    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:39:14.055056    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:39:14.066362    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:39:14.066374    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:39:14.083506    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:14.083517    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:14.108467    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:39:14.108473    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:39:14.120463    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:39:14.120474    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:39:14.141183    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:39:14.141196    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:39:14.160980    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:39:14.160991    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:39:14.333224    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:14.333379    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:14.345468    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:39:14.345539    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:14.355945    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:39:14.356016    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:14.366777    4979 logs.go:276] 2 containers: [148e5d9f19b4 1d8ed2209c51]
	I0729 16:39:14.366845    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:14.377054    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:39:14.377116    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:14.387468    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:39:14.387532    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:14.400575    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:39:14.400640    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:14.410934    4979 logs.go:276] 0 containers: []
	W0729 16:39:14.410944    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:14.410993    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:14.421476    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:39:14.421492    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:14.421498    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:14.426432    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:39:14.426441    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:39:14.445777    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:39:14.445789    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:39:14.457984    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:39:14.457995    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:39:14.473681    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:39:14.473694    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:39:14.485063    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:39:14.485073    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:39:14.500915    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:14.500929    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:14.525747    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:14.525757    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:14.560000    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:39:14.560008    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:39:14.574992    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:39:14.575003    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:39:14.586282    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:39:14.586295    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:39:14.605232    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:39:14.605243    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:14.617232    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:14.617242    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:17.154888    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:16.684452    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:22.157097    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:22.157258    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:22.169485    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:39:22.169557    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:22.180490    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:39:22.180560    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:22.190850    4979 logs.go:276] 2 containers: [148e5d9f19b4 1d8ed2209c51]
	I0729 16:39:22.190917    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:22.203342    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:39:22.203416    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:22.213591    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:39:22.213661    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:22.223827    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:39:22.223905    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:22.233879    4979 logs.go:276] 0 containers: []
	W0729 16:39:22.233891    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:22.233949    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:22.245211    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:39:22.245227    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:39:22.245233    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:22.257009    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:22.257019    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:22.294422    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:39:22.294434    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:39:22.310053    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:39:22.310064    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:39:22.324450    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:39:22.324465    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:39:22.339815    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:39:22.339830    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:39:22.357211    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:39:22.357226    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:39:22.369072    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:22.369083    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:22.394530    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:22.394538    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:22.428947    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:22.428954    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:22.433978    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:39:22.433984    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:39:22.449909    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:39:22.449922    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:39:22.461992    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:39:22.462005    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:39:21.686562    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:21.686732    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:21.704071    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:39:21.704163    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:21.717182    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:39:21.717254    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:21.728155    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:39:21.728220    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:21.740583    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:39:21.740660    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:21.754758    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:39:21.754838    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:21.765827    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:39:21.765899    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:21.776385    5115 logs.go:276] 0 containers: []
	W0729 16:39:21.776397    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:21.776457    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:21.787346    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:39:21.787366    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:39:21.787372    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:39:21.805096    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:39:21.805110    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:39:21.816193    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:39:21.816202    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:39:21.830468    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:39:21.830481    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:39:21.842745    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:39:21.842754    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:39:21.854373    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:21.854383    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:21.892289    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:39:21.892296    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:39:21.906383    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:39:21.906397    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:39:21.917760    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:39:21.917769    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:39:21.942660    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:39:21.942676    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:21.954591    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:39:21.954604    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:39:21.992153    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:21.992167    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:22.029576    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:39:22.029593    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:39:22.043764    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:39:22.043774    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:39:22.054818    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:39:22.054827    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:39:22.066463    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:22.066473    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:22.091420    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:22.091427    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:24.597255    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:24.975529    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:29.599427    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:29.599584    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:29.617489    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:39:29.617568    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:29.628287    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:39:29.628376    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:29.639202    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:39:29.639271    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:29.650101    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:39:29.650174    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:29.660968    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:39:29.661042    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:29.671858    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:39:29.671927    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:29.686562    5115 logs.go:276] 0 containers: []
	W0729 16:39:29.686580    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:29.686637    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:29.696956    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:39:29.696976    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:29.696981    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:29.733109    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:29.733122    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:29.768225    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:39:29.768238    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:39:29.806067    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:39:29.806078    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:39:29.822216    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:39:29.822230    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:39:29.838654    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:39:29.838668    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:39:29.850242    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:29.850254    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:29.854834    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:39:29.854840    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:39:29.866166    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:39:29.866178    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:39:29.883096    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:39:29.883112    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:39:29.894479    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:39:29.894492    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:29.907413    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:39:29.907423    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:39:29.921311    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:39:29.921326    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:39:29.932980    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:39:29.932993    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:39:29.954406    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:39:29.954416    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:39:29.966225    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:29.966237    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:29.990331    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:39:29.990341    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:39:29.976090    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:29.976183    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:29.986856    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:39:29.986927    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:29.998031    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:39:29.998107    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:30.013315    4979 logs.go:276] 2 containers: [148e5d9f19b4 1d8ed2209c51]
	I0729 16:39:30.013388    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:30.024009    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:39:30.024072    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:30.034323    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:39:30.034396    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:30.044427    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:39:30.044495    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:30.054489    4979 logs.go:276] 0 containers: []
	W0729 16:39:30.054501    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:30.054560    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:30.064517    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:39:30.064532    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:39:30.064537    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:39:30.078663    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:39:30.078674    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:39:30.090214    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:39:30.090226    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:39:30.101765    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:39:30.101775    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:39:30.113641    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:30.113652    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:30.118090    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:30.118099    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:30.152176    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:39:30.152186    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:39:30.164905    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:39:30.164916    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:39:30.180173    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:39:30.180184    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:39:30.200720    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:30.200732    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:30.224554    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:39:30.224562    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:30.237473    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:30.237485    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:30.272345    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:39:30.272357    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:39:32.788206    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:32.505026    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
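
Every retry in this transcript has the same shape: a probe of https://10.0.2.15:8443/healthz that gives up after roughly five seconds (the api_server.go:253/269 pairs above), followed by a full diagnostics pass over the same containers. To reproduce just the probe from a shell inside the guest, a minimal sketch; the endpoint and timeout come from the log lines above, while the use of curl and its flags is an assumption, since minikube issues this request from Go:

	# Probe the apiserver health endpoint the way the failing check does.
	# -k skips TLS verification (the apiserver cert is not in the host trust
	# store); --max-time 5 mirrors the ~5 s client timeout seen in the log.
	curl -ksS --max-time 5 https://10.0.2.15:8443/healthz
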
	I0729 16:39:37.790296    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:37.790388    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:37.802176    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:39:37.802248    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:37.813771    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:39:37.813848    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:37.825320    4979 logs.go:276] 2 containers: [148e5d9f19b4 1d8ed2209c51]
	I0729 16:39:37.825411    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:37.841609    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:39:37.841683    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:37.853345    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:39:37.853416    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:37.865160    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:39:37.865238    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:37.875951    4979 logs.go:276] 0 containers: []
	W0729 16:39:37.875964    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:37.876023    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:37.886496    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:39:37.886511    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:37.886517    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:37.923192    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:37.923206    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:37.928516    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:37.928526    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:37.974925    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:39:37.974937    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:39:37.990537    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:39:37.990548    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:39:38.005612    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:39:38.005623    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:39:38.022114    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:39:38.022125    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:39:38.038015    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:39:38.038026    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:39:38.055595    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:39:38.055606    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:39:38.083001    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:39:38.083010    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:39:38.095077    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:39:38.095089    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:39:38.113022    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:38.113032    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:38.138202    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:39:38.138213    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:37.507260    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:37.507484    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:37.534322    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:39:37.534452    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:37.551758    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:39:37.551832    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:37.565415    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:39:37.565487    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:37.576367    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:39:37.576433    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:37.586872    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:39:37.586939    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:37.597423    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:39:37.597490    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:37.610432    5115 logs.go:276] 0 containers: []
	W0729 16:39:37.610443    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:37.610495    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:37.621320    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:39:37.621340    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:39:37.621346    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:39:37.635464    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:39:37.635476    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:39:37.647142    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:37.647153    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:37.672143    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:39:37.672150    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:39:37.688902    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:39:37.688913    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:39:37.706253    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:39:37.706265    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:39:37.727669    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:39:37.727679    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:39:37.741451    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:39:37.741460    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:39:37.753485    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:39:37.753499    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:39:37.774064    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:39:37.774084    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:39:37.786205    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:39:37.786222    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:37.799003    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:37.799017    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:37.804040    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:37.804051    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:37.853119    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:39:37.853135    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:39:37.894072    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:39:37.894083    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:39:37.924754    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:37.924763    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:37.967491    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:39:37.967509    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:39:40.490709    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:40.651324    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
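
Before any logs are read, each pass enumerates the control-plane containers one component at a time with name-filtered docker ps calls, producing the logs.go:276 "N containers:" lines above. A shell sketch assembled from those same commands; the component list simply mirrors the filters visible in the log:

	# Discover container IDs per component, as the repeated docker ps calls do.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet storage-provisioner; do
	  ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
	  echo "${c}: ${ids:-none}"   # kindnet reports none, matching the warning
	done
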
	I0729 16:39:45.491097    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:45.491223    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:45.503194    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:39:45.503278    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:45.513917    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:39:45.513997    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:45.524841    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:39:45.524906    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:45.534975    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:39:45.535046    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:45.545691    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:39:45.545762    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:45.556097    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:39:45.556162    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:45.566566    5115 logs.go:276] 0 containers: []
	W0729 16:39:45.566578    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:45.566640    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:45.576984    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:39:45.577000    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:45.577005    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:45.614853    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:39:45.614865    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:39:45.629190    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:39:45.629203    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:39:45.640545    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:39:45.640555    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:39:45.651457    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:39:45.651469    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:39:45.669631    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:39:45.669647    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:39:45.682787    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:45.682800    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:45.687711    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:45.687720    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:45.725127    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:39:45.725141    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:39:45.739997    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:39:45.740005    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:39:45.752721    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:39:45.752731    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:45.765471    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:39:45.765483    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:39:45.804563    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:39:45.804573    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:39:45.823179    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:39:45.823194    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:39:45.845671    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:39:45.845680    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:39:45.859022    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:39:45.859033    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:39:45.871823    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:45.871836    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:45.651622    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:45.651696    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:45.663367    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:39:45.663441    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:45.674406    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:39:45.674476    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:45.686353    4979 logs.go:276] 2 containers: [148e5d9f19b4 1d8ed2209c51]
	I0729 16:39:45.686425    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:45.697607    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:39:45.697677    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:45.709287    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:39:45.709357    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:45.725848    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:39:45.725911    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:45.738640    4979 logs.go:276] 0 containers: []
	W0729 16:39:45.738652    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:45.738708    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:45.751825    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:39:45.751844    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:39:45.751849    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:39:45.767195    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:39:45.767205    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:39:45.779633    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:45.779646    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:45.805721    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:45.805729    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:45.842053    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:45.842066    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:45.846847    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:39:45.846853    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:39:45.859264    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:39:45.859272    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:39:45.871941    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:39:45.871949    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:39:45.889172    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:39:45.889183    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:39:45.901642    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:39:45.901654    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:39:45.919685    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:39:45.919699    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:45.931590    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:45.931602    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:45.966692    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:39:45.966707    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:39:48.482443    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:48.403913    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
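
Each discovered container is then tailed with a fixed 400-line window. The loop below is equivalent to the run of docker logs invocations above; the IDs are the ones this log reports for the PID 4979 profile, so they are only valid inside that particular guest:

	# Tail the last 400 lines of every control-plane container (logs.go:123).
	for id in 59c38f954feb fb7684c3d1e5 148e5d9f19b4 1d8ed2209c51 \
	          cc0c9e0f620b 3a31dd736962 5baa0fe475ee 20700082a4de; do
	  docker logs --tail 400 "$id"
	done
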
	I0729 16:39:53.484590    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:53.484673    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:53.496331    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:39:53.496406    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:53.507778    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:39:53.507849    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:53.518724    4979 logs.go:276] 2 containers: [148e5d9f19b4 1d8ed2209c51]
	I0729 16:39:53.518803    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:53.529984    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:39:53.530055    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:53.541278    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:39:53.541352    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:53.553276    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:39:53.553347    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:53.564723    4979 logs.go:276] 0 containers: []
	W0729 16:39:53.564735    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:53.564799    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:53.576049    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:39:53.576063    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:39:53.576068    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:39:53.589019    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:39:53.589031    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:39:53.603943    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:53.603956    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:53.640019    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:39:53.640039    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:39:53.655395    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:39:53.655408    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:39:53.668753    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:39:53.668764    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:39:53.685476    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:39:53.685487    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:39:53.703711    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:39:53.703723    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:39:53.716442    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:53.716455    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:53.744717    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:39:53.744738    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:53.758759    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:53.758772    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:53.764061    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:53.764070    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:53.803360    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:39:53.803375    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:39:53.406182    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:53.406336    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:53.421683    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:39:53.421761    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:53.438135    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:39:53.438198    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:53.448543    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:39:53.448614    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:53.459356    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:39:53.459430    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:53.469868    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:39:53.469934    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:53.480244    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:39:53.480310    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:53.492442    5115 logs.go:276] 0 containers: []
	W0729 16:39:53.492456    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:53.492520    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:53.503662    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:39:53.503681    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:39:53.503687    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:39:53.516046    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:39:53.516060    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:39:53.530246    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:39:53.530256    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:39:53.542465    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:53.542473    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:53.582033    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:39:53.582046    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:39:53.596827    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:39:53.596844    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:39:53.608907    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:53.608921    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:53.651222    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:39:53.651235    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:39:53.691600    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:39:53.691615    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:39:53.704387    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:39:53.704396    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:39:53.732589    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:39:53.732607    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:39:53.751603    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:53.751619    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:53.777837    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:39:53.777855    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:39:53.792737    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:39:53.792752    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:39:53.809571    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:39:53.809587    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:53.821900    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:53.821913    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:53.825852    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:39:53.825858    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:39:56.321257    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:56.340012    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
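
Alongside per-container logs, every pass collects host-side diagnostics over SSH: the kubelet and Docker/cri-docker journals plus a filtered dmesg. These three commands are verbatim from the log and runnable as-is inside the guest:

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
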
	I0729 16:40:01.323481    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:01.323601    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:01.337496    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:40:01.337579    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:01.350665    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:40:01.350737    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:01.362377    4979 logs.go:276] 2 containers: [148e5d9f19b4 1d8ed2209c51]
	I0729 16:40:01.362443    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:01.373988    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:40:01.374055    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:01.386900    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:40:01.386978    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:01.399131    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:40:01.399203    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:01.410475    4979 logs.go:276] 0 containers: []
	W0729 16:40:01.410485    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:01.410543    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:01.421413    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:40:01.421430    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:40:01.421435    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:40:01.436785    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:40:01.436794    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:40:01.448782    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:40:01.448793    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:40:01.466705    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:01.466720    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:01.491771    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:40:01.491788    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:40:01.504564    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:01.504576    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:01.541392    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:01.541413    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:01.547498    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:01.547509    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:01.585090    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:40:01.585101    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:40:01.608544    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:40:01.608555    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:40:01.622496    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:40:01.622507    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:40:01.643239    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:40:01.643248    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:40:01.655688    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:40:01.655701    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:04.173184    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:01.340977    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:01.341184    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:01.354621    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:40:01.354696    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:01.367284    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:40:01.367354    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:01.378821    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:40:01.378892    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:01.390617    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:40:01.390689    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:01.402293    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:40:01.402361    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:01.414093    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:40:01.414157    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:01.425286    5115 logs.go:276] 0 containers: []
	W0729 16:40:01.425297    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:01.425357    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:01.436613    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:40:01.436630    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:01.436636    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:01.473161    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:40:01.473174    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:40:01.488459    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:40:01.488475    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:40:01.500763    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:40:01.500780    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:40:01.513499    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:40:01.513512    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:40:01.525893    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:40:01.525907    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:40:01.548335    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:01.548345    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:01.572971    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:01.572985    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:01.613624    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:40:01.613638    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:40:01.627054    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:40:01.627069    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:01.639532    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:01.639544    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:01.644516    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:40:01.644523    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:40:01.660327    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:40:01.660338    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:40:01.699848    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:40:01.699862    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:40:01.713827    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:40:01.713836    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:40:01.725436    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:40:01.725446    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:40:01.736979    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:40:01.736990    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:40:04.260828    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
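
Node state is captured with the kubectl binary that minikube stages inside the guest, pointed at the guest-local kubeconfig; the v1.24.1 path matches the Kubernetes version under test here. Verbatim from the "describe nodes" steps above:

	sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
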
	I0729 16:40:09.174958    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:09.175286    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:09.193501    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:40:09.193583    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:09.207676    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:40:09.207746    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:09.219350    4979 logs.go:276] 2 containers: [148e5d9f19b4 1d8ed2209c51]
	I0729 16:40:09.219416    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:09.229934    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:40:09.229999    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:09.262920    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:09.262995    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:09.274585    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:40:09.274656    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:09.285989    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:40:09.286058    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:09.297682    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:40:09.297756    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:09.308839    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:40:09.308911    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:09.319814    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:40:09.319887    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:09.331289    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:40:09.331355    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:09.342999    5115 logs.go:276] 0 containers: []
	W0729 16:40:09.343015    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:09.343078    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:09.354865    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:40:09.354887    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:40:09.354893    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:40:09.399967    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:40:09.399997    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:40:09.415445    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:40:09.415460    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:40:09.427166    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:40:09.427179    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:40:09.449260    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:40:09.449270    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:40:09.464159    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:40:09.464176    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:40:09.475935    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:09.475948    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:09.499895    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:09.499913    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:09.504441    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:40:09.504449    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:40:09.520858    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:40:09.520872    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:40:09.533254    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:40:09.533269    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:40:09.545070    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:40:09.545081    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:40:09.562830    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:40:09.562842    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:40:09.574220    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:40:09.574234    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:09.586482    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:09.586491    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:09.625101    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:40:09.625115    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:40:09.637998    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:09.638008    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:09.242644    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:40:09.242710    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:09.253013    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:40:09.253080    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:09.263860    4979 logs.go:276] 0 containers: []
	W0729 16:40:09.263870    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:09.263900    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:09.279965    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:40:09.279989    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:40:09.279996    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:40:09.292577    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:40:09.292587    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:40:09.310840    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:40:09.310850    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:40:09.328482    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:09.328494    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:09.365669    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:40:09.365681    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:40:09.378230    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:40:09.378242    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:40:09.390510    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:40:09.390522    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:40:09.406783    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:09.406796    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:09.432291    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:40:09.432304    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:09.444278    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:09.444291    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:09.481170    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:09.481180    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:09.486634    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:40:09.486646    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:40:09.508089    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:40:09.508099    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:40:12.029502    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:12.174262    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
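
The "container status" step is runtime-agnostic: the backtick expression in the repeated command above prefers crictl when it is on PATH and otherwise falls back to docker ps -a. Spelled out with the same fallback logic:

	# Equivalent to: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	# If crictl exists, run it; if the substitution leaves a bare "crictl" that
	# fails to execute, the || falls through to docker ps -a instead.
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
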
	I0729 16:40:17.031580    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:17.031872    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:17.047832    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:40:17.047919    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:17.060091    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:40:17.060171    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:17.071538    4979 logs.go:276] 2 containers: [148e5d9f19b4 1d8ed2209c51]
	I0729 16:40:17.071610    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:17.082101    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:40:17.082172    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:17.092044    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:40:17.092120    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:17.102905    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:40:17.102973    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:17.112834    4979 logs.go:276] 0 containers: []
	W0729 16:40:17.112845    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:17.112903    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:17.123284    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:40:17.123301    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:17.123306    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:17.155836    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:17.155843    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:17.160330    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:40:17.160339    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:40:17.178748    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:40:17.178757    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:40:17.192137    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:40:17.192150    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:40:17.208760    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:40:17.208776    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:40:17.221415    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:17.221428    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:17.247331    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:17.247345    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:17.286535    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:40:17.286548    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:40:17.301347    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:40:17.301362    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:40:17.314953    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:40:17.314964    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:40:17.331960    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:40:17.331971    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:40:17.351297    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:40:17.351306    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
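
The cycle above repeats for the rest of this failure: a healthz probe against https://10.0.2.15:8443/healthz times out after roughly five seconds, minikube reports the endpoint as "stopped", and a full round of log gathering follows before the next probe. Below is a minimal Go sketch of that probe pattern, assuming a plain net/http client with a five-second timeout and a self-signed apiserver certificate; it is an illustration of the behavior visible in the log, not minikube's actual api_server.go code.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // checkHealthz issues one GET to the apiserver healthz endpoint and
    // returns an error if the request times out or returns non-200.
    func checkHealthz(url string) error {
        client := &http.Client{
            // Roughly matches the ~5 s gap between each "Checking" and
            // "stopped" pair in the log above.
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The apiserver's cert is self-signed inside the VM.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return fmt.Errorf("stopped: %s: %w", url, err)
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println(err)
        }
    }

The "(Client.Timeout exceeded while awaiting headers)" text in the log is the standard error net/http reports when this client deadline fires before response headers arrive.
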
	I0729 16:40:17.176341    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:17.176455    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:17.187616    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:40:17.187689    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:17.200941    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:40:17.201016    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:17.213477    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:40:17.213549    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:17.225174    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:40:17.225254    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:17.236578    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:40:17.236650    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:17.248031    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:40:17.248104    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:17.258758    5115 logs.go:276] 0 containers: []
	W0729 16:40:17.258790    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:17.258856    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:17.270438    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:40:17.270455    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:40:17.270463    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:40:17.311197    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:40:17.311212    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:40:17.326775    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:40:17.326789    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:40:17.349717    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:40:17.349731    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:40:17.368137    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:40:17.368153    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:40:17.379877    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:17.379889    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:17.403788    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:40:17.403798    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:17.416334    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:17.416346    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:17.420705    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:40:17.420713    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:40:17.432302    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:40:17.432314    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:40:17.443791    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:40:17.443803    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:40:17.457403    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:40:17.457415    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:40:17.471128    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:17.471138    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:17.508702    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:40:17.508714    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:40:17.520512    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:40:17.520523    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:40:17.532440    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:40:17.532451    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:40:17.543868    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:17.543877    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:20.082668    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:19.870313    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:25.084818    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:25.084910    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:25.096326    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:40:25.096409    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:25.108024    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:40:25.108101    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:25.119060    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:40:25.119124    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:25.130932    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:40:25.131000    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:25.142452    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:40:25.142533    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:25.155415    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:40:25.155483    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:25.166525    5115 logs.go:276] 0 containers: []
	W0729 16:40:25.166541    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:25.166606    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:25.188579    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:40:25.188599    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:25.188604    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:25.229985    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:25.230001    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:25.266874    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:40:25.266887    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:40:25.280029    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:40:25.280045    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:40:25.291490    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:40:25.291502    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:25.303570    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:25.303582    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:25.307759    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:40:25.307766    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:40:25.319743    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:40:25.319754    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:40:25.332401    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:40:25.332413    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:40:25.346733    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:40:25.346743    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:40:25.358211    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:40:25.358222    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:40:25.377196    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:40:25.377206    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:40:25.391311    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:40:25.391324    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:40:25.429993    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:40:25.430006    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:40:25.444047    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:40:25.444060    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:40:25.465269    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:40:25.465283    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:40:25.480684    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:25.480695    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:24.872582    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:24.872794    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:24.889207    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:40:24.889304    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:24.902110    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:40:24.902191    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:24.912903    4979 logs.go:276] 4 containers: [171279a29803 af10021a5c6e 148e5d9f19b4 1d8ed2209c51]
	I0729 16:40:24.912975    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:24.923933    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:40:24.924003    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:24.936757    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:40:24.936822    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:24.947146    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:40:24.947208    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:24.958728    4979 logs.go:276] 0 containers: []
	W0729 16:40:24.958739    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:24.958799    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:24.969239    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:40:24.969259    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:40:24.969265    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:40:24.984218    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:40:24.984229    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:40:24.996272    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:40:24.996282    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:40:25.007682    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:25.007695    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:25.012310    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:40:25.012319    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:40:25.029553    4979 logs.go:123] Gathering logs for coredns [af10021a5c6e] ...
	I0729 16:40:25.029564    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af10021a5c6e"
	I0729 16:40:25.040769    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:25.040780    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:25.065583    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:25.065594    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:25.100351    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:40:25.100362    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:40:25.117113    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:40:25.117127    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:25.129787    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:25.129800    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:25.180868    4979 logs.go:123] Gathering logs for coredns [171279a29803] ...
	I0729 16:40:25.180880    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 171279a29803"
	I0729 16:40:25.193453    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:40:25.193466    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:40:25.206341    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:40:25.206352    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:40:25.223882    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:40:25.223894    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:40:27.746909    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:28.005198    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:32.749145    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:32.749326    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:32.762281    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:40:32.762358    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:32.773077    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:40:32.773147    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:32.783176    4979 logs.go:276] 4 containers: [171279a29803 af10021a5c6e 148e5d9f19b4 1d8ed2209c51]
	I0729 16:40:32.783249    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:32.793776    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:40:32.793842    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:32.803729    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:40:32.803795    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:32.814425    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:40:32.814492    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:32.828522    4979 logs.go:276] 0 containers: []
	W0729 16:40:32.828540    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:32.828591    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:32.839147    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:40:32.839165    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:40:32.839171    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:40:32.851247    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:40:32.851259    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:40:32.867307    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:40:32.867318    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:40:32.884095    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:40:32.884105    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:32.895399    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:32.895410    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:32.929175    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:40:32.929190    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:40:32.941089    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:32.941105    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:32.966361    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:32.966372    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:33.000094    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:33.000118    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:33.011805    4979 logs.go:123] Gathering logs for coredns [171279a29803] ...
	I0729 16:40:33.011817    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 171279a29803"
	I0729 16:40:33.025890    4979 logs.go:123] Gathering logs for coredns [af10021a5c6e] ...
	I0729 16:40:33.025902    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af10021a5c6e"
	I0729 16:40:33.038606    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:40:33.038617    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:40:33.051756    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:40:33.051767    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:40:33.066778    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:40:33.066785    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:40:33.081997    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:40:33.082013    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
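
Besides per-container logs, every gathering round pulls the same host-level sources: the kubelet and docker/cri-docker units via journalctl, warning-and-above kernel messages via dmesg, `kubectl describe nodes` against the in-VM kubeconfig, and container status via crictl with a docker fallback. The sketch below drives those exact shell commands from Go as a stand-in for the ssh_runner calls in the log; the gather helper is illustrative only, not minikube's API.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one collection command through /bin/bash -c, exactly as
    // the ssh_runner.go:195 lines above show, and prints its output.
    func gather(name, cmd string) {
        fmt.Println("==> gathering:", name)
        out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Print(string(out))
    }

    func main() {
        // Commands copied verbatim from the log above.
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
        gather("describe nodes", "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
        gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }
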
	I0729 16:40:33.007316    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:33.007413    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:33.018635    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:40:33.018745    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:33.030101    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:40:33.030175    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:33.041619    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:40:33.041688    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:33.054328    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:40:33.054399    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:33.065647    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:40:33.065724    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:33.077161    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:40:33.077242    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:33.088196    5115 logs.go:276] 0 containers: []
	W0729 16:40:33.088206    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:33.088269    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:33.104787    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:40:33.104805    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:40:33.104811    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:40:33.125791    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:40:33.125802    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:40:33.138654    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:33.138666    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:33.162003    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:40:33.162015    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:40:33.176560    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:40:33.176575    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:40:33.195394    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:40:33.195406    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:40:33.206679    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:40:33.206691    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:40:33.217804    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:40:33.217816    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:33.229878    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:40:33.229890    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:40:33.241725    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:40:33.241737    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:40:33.259467    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:33.259480    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:33.295835    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:33.295844    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:33.299832    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:33.299844    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:33.335112    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:40:33.335123    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:40:33.349754    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:40:33.349768    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:40:33.387193    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:40:33.387204    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:40:33.401786    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:40:33.401799    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:40:35.915297    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:35.599815    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:40.917526    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:40.917611    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:40.930480    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:40:40.930559    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:40.942360    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:40:40.942433    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:40.955064    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:40:40.955135    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:40.965765    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:40:40.965838    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:40.976461    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:40:40.976524    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:40.987361    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:40:40.987439    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:40.997807    5115 logs.go:276] 0 containers: []
	W0729 16:40:40.997817    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:40.997872    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:41.008494    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:40:41.008517    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:41.008524    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:41.012767    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:40:41.012776    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:40:41.024910    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:40:41.024924    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:41.037212    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:41.037226    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:41.073302    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:41.073310    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:40.601972    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:40.602148    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:40.618544    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:40:40.618630    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:40.631560    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:40:40.631633    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:40.642456    4979 logs.go:276] 4 containers: [171279a29803 af10021a5c6e 148e5d9f19b4 1d8ed2209c51]
	I0729 16:40:40.642522    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:40.656805    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:40:40.656879    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:40.669066    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:40:40.669138    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:40.679701    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:40:40.679765    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:40.689897    4979 logs.go:276] 0 containers: []
	W0729 16:40:40.689908    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:40.689965    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:40.700424    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:40:40.700441    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:40.700446    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:40.704732    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:40:40.704741    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:40:40.716161    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:40:40.716173    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:40:40.731973    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:40:40.731984    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:40:40.744256    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:40:40.744270    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:40:40.755673    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:40:40.755684    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:40:40.769644    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:40:40.769656    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:40:40.786309    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:40:40.786320    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:40:40.803515    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:40:40.803525    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:40.814924    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:40.814933    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:40.849686    4979 logs.go:123] Gathering logs for coredns [af10021a5c6e] ...
	I0729 16:40:40.849700    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af10021a5c6e"
	I0729 16:40:40.861490    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:40.861501    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:40.894182    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:40:40.894192    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:40:40.908137    4979 logs.go:123] Gathering logs for coredns [171279a29803] ...
	I0729 16:40:40.908148    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 171279a29803"
	I0729 16:40:40.919913    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:40.919923    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:43.448820    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:41.108387    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:40:41.108397    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:40:41.125561    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:40:41.125576    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:40:41.141326    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:40:41.141336    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:40:41.152126    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:40:41.152138    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:40:41.166131    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:40:41.166144    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:40:41.206792    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:40:41.206802    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:40:41.222334    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:40:41.222348    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:40:41.243539    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:40:41.243552    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:40:41.255693    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:40:41.255711    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:40:41.270004    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:40:41.270016    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:40:41.281368    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:40:41.281378    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:40:41.300922    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:41.300932    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:43.827164    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:48.451019    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:48.451190    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:48.463258    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:40:48.463329    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:48.474867    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:40:48.474939    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:48.485714    4979 logs.go:276] 4 containers: [171279a29803 af10021a5c6e 148e5d9f19b4 1d8ed2209c51]
	I0729 16:40:48.485794    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:48.496147    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:40:48.496216    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:48.505973    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:40:48.506048    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:48.516127    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:40:48.516191    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:48.526169    4979 logs.go:276] 0 containers: []
	W0729 16:40:48.526181    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:48.526233    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:48.536860    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:40:48.536877    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:48.536883    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:48.572941    4979 logs.go:123] Gathering logs for coredns [171279a29803] ...
	I0729 16:40:48.572953    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 171279a29803"
	I0729 16:40:48.588374    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:40:48.588385    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:40:48.600753    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:40:48.600764    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:40:48.617878    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:40:48.617888    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:40:48.629495    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:40:48.629504    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:48.641608    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:48.641623    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:48.646540    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:40:48.646548    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:40:48.660829    4979 logs.go:123] Gathering logs for coredns [af10021a5c6e] ...
	I0729 16:40:48.660841    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af10021a5c6e"
	I0729 16:40:48.672361    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:48.672372    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:48.705336    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:40:48.705342    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:40:48.716428    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:40:48.716440    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:40:48.732131    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:40:48.732142    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:40:48.752824    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:40:48.752840    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:40:48.764484    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:48.764495    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:48.829273    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:48.829355    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:48.840273    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:40:48.840342    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:48.850949    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:40:48.851015    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:48.864953    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:40:48.865022    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:48.875955    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:40:48.876030    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:48.888644    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:40:48.888713    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:48.899078    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:40:48.899152    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:48.909450    5115 logs.go:276] 0 containers: []
	W0729 16:40:48.909463    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:48.909527    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:48.920089    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:40:48.920108    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:40:48.920114    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:40:48.934012    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:40:48.934022    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:40:48.946323    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:40:48.946333    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:40:48.958841    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:48.958853    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:48.962995    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:40:48.963003    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:40:49.001047    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:40:49.001058    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:40:49.013405    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:40:49.013418    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:40:49.036916    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:40:49.036928    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:40:49.048215    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:49.048227    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:49.071521    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:40:49.071532    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:40:49.085223    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:49.085232    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:49.120020    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:40:49.120031    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:40:49.134551    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:40:49.134561    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:49.147844    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:49.147855    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:49.185368    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:40:49.185377    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:40:49.198217    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:40:49.198226    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:40:49.216104    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:40:49.216116    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
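
Each round also begins by enumerating control-plane containers one component at a time with docker ps name filters (k8s_kube-apiserver, k8s_etcd, and so on), then tails 400 lines from every ID found; the recurring `No container was found matching "kindnet"` warning is expected here, presumably because that CNI is not deployed on this cluster. A hedged Go sketch of that enumeration loop follows; containerIDs is an illustrative helper, not minikube's logs.go.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists IDs of containers whose name matches
    // k8s_<component>, mirroring the docker ps filter in the log.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "storage-provisioner",
        }
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
            for _, id := range ids {
                // Mirrors `docker logs --tail 400 <id>` from the log;
                // output and errors are deliberately ignored here.
                exec.Command("docker", "logs", "--tail", "400", id).Run()
            }
        }
    }
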
	I0729 16:40:51.290089    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:51.742159    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:56.292110    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:56.292275    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:56.306187    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:40:56.306269    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:56.318445    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:40:56.318523    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:56.329743    4979 logs.go:276] 4 containers: [171279a29803 af10021a5c6e 148e5d9f19b4 1d8ed2209c51]
	I0729 16:40:56.329816    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:56.339942    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:40:56.340014    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:56.354305    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:40:56.354377    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:56.364543    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:40:56.364609    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:56.374660    4979 logs.go:276] 0 containers: []
	W0729 16:40:56.374672    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:56.374727    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:56.384933    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:40:56.384948    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:56.384953    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:56.422998    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:40:56.423014    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:40:56.442232    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:40:56.442245    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:40:56.454702    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:40:56.454716    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:40:56.466796    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:56.466806    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:56.491700    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:40:56.491707    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:56.505150    4979 logs.go:123] Gathering logs for coredns [171279a29803] ...
	I0729 16:40:56.505164    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 171279a29803"
	I0729 16:40:56.517237    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:40:56.517253    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:40:56.532803    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:40:56.532817    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:40:56.550087    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:56.550101    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:56.582983    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:40:56.582990    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:40:56.598683    4979 logs.go:123] Gathering logs for coredns [af10021a5c6e] ...
	I0729 16:40:56.598695    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af10021a5c6e"
	I0729 16:40:56.610325    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:40:56.610336    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:40:56.622000    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:56.622014    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:56.627176    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:40:56.627184    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:40:59.141106    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:56.744336    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:56.744421    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:56.755092    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:40:56.755162    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:56.765314    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:40:56.765382    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:56.775801    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:40:56.775876    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:56.786260    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:40:56.786335    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:56.797145    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:40:56.797210    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:56.808170    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:40:56.808247    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:56.818366    5115 logs.go:276] 0 containers: []
	W0729 16:40:56.818378    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:56.818437    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:56.829497    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:40:56.829516    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:40:56.829522    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:40:56.840893    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:56.840904    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:56.863765    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:56.863775    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:56.898844    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:56.898851    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:56.902716    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:40:56.902725    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:40:56.924038    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:40:56.924050    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:40:56.935374    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:40:56.935389    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:40:56.946298    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:56.946309    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:56.982958    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:40:56.982972    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:40:56.997590    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:40:56.997600    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:40:57.013830    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:40:57.013845    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:57.026734    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:40:57.026748    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:40:57.042458    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:40:57.042473    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:40:57.060717    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:40:57.060729    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:40:57.072861    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:40:57.072870    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:40:57.110018    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:40:57.110030    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:40:57.123617    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:40:57.123627    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
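
The block above is one full iteration of minikube's apiserver wait loop: probe /healthz with a short client timeout and, on failure, enumerate the control-plane containers by name filter and tail each one's logs before retrying. A rough shell equivalent of one cycle (illustrative only — the IP, name filters, and tail length are taken from this run; minikube issues these same commands over SSH from api_server.go and logs.go, and its real probe also presents client certificates):

    # Probe the apiserver until it answers; each attempt fails fast.
    until curl -ksf --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; do
      # Discover container IDs per component, mirroring the name filters above.
      for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
               kube-controller-manager kindnet storage-provisioner; do
        for id in $(docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}'); do
          # Tail the last 400 lines of each matching container, as logs.go does.
          docker logs --tail 400 "$id"
        done
      done
      sleep 2
    done
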
	I0729 16:40:59.637146    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:04.143545    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:04.143991    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:41:04.190880    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:41:04.191025    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:41:04.210862    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:41:04.210952    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:41:04.225691    4979 logs.go:276] 4 containers: [171279a29803 af10021a5c6e 148e5d9f19b4 1d8ed2209c51]
	I0729 16:41:04.225763    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:41:04.639365    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:04.639526    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:41:04.650059    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:41:04.650125    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:41:04.660611    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:41:04.660678    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:41:04.671103    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:41:04.671166    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:41:04.681534    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:41:04.681606    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:41:04.691756    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:41:04.691820    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:41:04.702047    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:41:04.702109    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:41:04.712105    5115 logs.go:276] 0 containers: []
	W0729 16:41:04.712118    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:41:04.712179    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:41:04.722614    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:41:04.722634    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:41:04.722641    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:41:04.726899    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:41:04.726912    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:41:04.761656    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:41:04.761668    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:41:04.776269    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:41:04.776281    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:41:04.787645    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:41:04.787657    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:41:04.799174    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:41:04.799185    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:41:04.836941    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:41:04.836954    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:41:04.850747    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:41:04.850757    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:41:04.872043    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:41:04.872054    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:41:04.886183    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:41:04.886194    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:41:04.897880    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:41:04.897892    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:41:04.909979    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:41:04.909991    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:41:04.933658    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:41:04.933669    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:41:04.945395    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:41:04.945407    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:41:04.983650    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:41:04.983664    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:41:05.005634    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:41:05.005646    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:41:05.017684    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:41:05.017694    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:41:04.240435    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:41:04.240503    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:41:04.251720    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:41:04.251792    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:41:04.263488    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:41:04.263554    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:41:04.274614    4979 logs.go:276] 0 containers: []
	W0729 16:41:04.274626    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:41:04.274681    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:41:04.285903    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:41:04.285921    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:41:04.285927    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:41:04.300781    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:41:04.300794    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:41:04.312485    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:41:04.312497    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:41:04.330935    4979 logs.go:123] Gathering logs for coredns [171279a29803] ...
	I0729 16:41:04.330945    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 171279a29803"
	I0729 16:41:04.344472    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:41:04.344484    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:41:04.360123    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:41:04.360135    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:41:04.379222    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:41:04.379233    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:41:04.397540    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:41:04.397550    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:41:04.432554    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:41:04.432565    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:41:04.436984    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:41:04.436993    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:41:04.472281    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:41:04.472292    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:41:04.493587    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:41:04.493597    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:41:04.517594    4979 logs.go:123] Gathering logs for coredns [af10021a5c6e] ...
	I0729 16:41:04.517601    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af10021a5c6e"
	I0729 16:41:04.535542    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:41:04.535553    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:41:04.552597    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:41:04.552607    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:41:07.066468    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:07.531242    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:12.068823    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:12.069216    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:41:12.104940    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:41:12.105068    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:41:12.123861    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:41:12.123962    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:41:12.138091    4979 logs.go:276] 4 containers: [171279a29803 af10021a5c6e 148e5d9f19b4 1d8ed2209c51]
	I0729 16:41:12.138168    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:41:12.154534    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:41:12.154603    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:41:12.172423    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:41:12.172494    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:41:12.183236    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:41:12.183313    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:41:12.195120    4979 logs.go:276] 0 containers: []
	W0729 16:41:12.195138    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:41:12.195204    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:41:12.205659    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:41:12.205677    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:41:12.205682    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:41:12.210351    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:41:12.210358    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:41:12.226513    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:41:12.226527    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:41:12.238505    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:41:12.238529    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:41:12.256899    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:41:12.256913    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:41:12.269127    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:41:12.269137    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:41:12.302620    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:41:12.302633    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:41:12.314626    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:41:12.314643    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:41:12.329667    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:41:12.329678    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:41:12.345107    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:41:12.345117    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:41:12.371222    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:41:12.371237    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:41:12.405211    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:41:12.405223    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:41:12.420719    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:41:12.420730    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:41:12.432278    4979 logs.go:123] Gathering logs for coredns [171279a29803] ...
	I0729 16:41:12.432289    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 171279a29803"
	I0729 16:41:12.445279    4979 logs.go:123] Gathering logs for coredns [af10021a5c6e] ...
	I0729 16:41:12.445293    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af10021a5c6e"
	I0729 16:41:12.533384    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:12.533481    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:41:12.545187    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:41:12.545259    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:41:12.555386    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:41:12.555455    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:41:12.566176    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:41:12.566253    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:41:12.579573    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:41:12.579645    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:41:12.589755    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:41:12.589822    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:41:12.600705    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:41:12.600783    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:41:12.610698    5115 logs.go:276] 0 containers: []
	W0729 16:41:12.610709    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:41:12.610769    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:41:12.621488    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:41:12.621505    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:41:12.621511    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:41:12.639589    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:41:12.639598    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:41:12.653786    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:41:12.653797    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:41:12.665715    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:41:12.665728    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:41:12.689128    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:41:12.689136    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:41:12.700651    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:41:12.700664    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:41:12.722694    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:41:12.722707    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:41:12.734975    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:41:12.734986    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:41:12.772794    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:41:12.772813    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:41:12.785066    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:41:12.785077    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:41:12.797336    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:41:12.797347    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:41:12.811743    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:41:12.811755    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:41:12.816410    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:41:12.816416    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:41:12.851147    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:41:12.851161    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:41:12.869727    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:41:12.869739    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:41:12.892076    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:41:12.892091    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:41:12.907756    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:41:12.907769    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:41:15.446069    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:14.960544    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:20.448169    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:20.448280    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:41:20.459865    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:41:20.459950    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:41:20.475989    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:41:20.476068    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:41:20.487156    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:41:20.487227    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:41:20.497810    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:41:20.497883    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:41:20.508101    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:41:20.508172    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:41:20.518901    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:41:20.518972    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:41:20.529449    5115 logs.go:276] 0 containers: []
	W0729 16:41:20.529463    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:41:20.529522    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:41:20.539788    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:41:20.539808    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:41:20.539814    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:41:20.551702    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:41:20.551712    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:41:20.564854    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:41:20.564865    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:41:20.601647    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:41:20.601658    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:41:20.636271    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:41:20.636286    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:41:20.658196    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:41:20.658208    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:41:20.681367    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:41:20.681375    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:41:20.695212    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:41:20.695223    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:41:20.706438    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:41:20.706450    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:41:20.717576    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:41:20.717586    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:41:20.739137    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:41:20.739149    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:41:20.751153    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:41:20.751164    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:41:20.764521    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:41:20.764535    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:41:20.778803    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:41:20.778816    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:41:20.790594    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:41:20.790605    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:41:20.801615    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:41:20.801626    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:41:20.805625    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:41:20.805631    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:41:19.962818    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:19.963037    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:41:19.984361    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:41:19.984460    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:41:19.999608    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:41:19.999693    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:41:20.012141    4979 logs.go:276] 4 containers: [171279a29803 af10021a5c6e 148e5d9f19b4 1d8ed2209c51]
	I0729 16:41:20.012210    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:41:20.022749    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:41:20.022821    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:41:20.033292    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:41:20.033366    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:41:20.044382    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:41:20.044455    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:41:20.054821    4979 logs.go:276] 0 containers: []
	W0729 16:41:20.054833    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:41:20.054890    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:41:20.065534    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:41:20.065553    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:41:20.065568    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:41:20.100300    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:41:20.100310    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:41:20.105097    4979 logs.go:123] Gathering logs for coredns [af10021a5c6e] ...
	I0729 16:41:20.105105    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af10021a5c6e"
	I0729 16:41:20.119813    4979 logs.go:123] Gathering logs for coredns [171279a29803] ...
	I0729 16:41:20.119828    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 171279a29803"
	I0729 16:41:20.131801    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:41:20.131813    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:41:20.144404    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:41:20.144416    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:41:20.169774    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:41:20.169783    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:41:20.216101    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:41:20.216113    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:41:20.232328    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:41:20.232339    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:41:20.244056    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:41:20.244071    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:41:20.261609    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:41:20.261621    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:41:20.274384    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:41:20.274394    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:41:20.289748    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:41:20.289760    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:41:20.309364    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:41:20.309376    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:41:20.324879    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:41:20.324890    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:41:22.839008    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:23.345186    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:27.841337    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:27.841603    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:41:27.869256    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:41:27.869377    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:41:27.886496    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:41:27.886588    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:41:27.899839    4979 logs.go:276] 4 containers: [171279a29803 af10021a5c6e 148e5d9f19b4 1d8ed2209c51]
	I0729 16:41:27.899921    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:41:27.911635    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:41:27.911701    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:41:27.922673    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:41:27.922736    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:41:27.933159    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:41:27.933233    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:41:27.950403    4979 logs.go:276] 0 containers: []
	W0729 16:41:27.950416    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:41:27.950470    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:41:27.960478    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:41:27.960493    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:41:27.960497    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:41:27.993640    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:41:27.993652    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:41:28.026897    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:41:28.026913    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:41:28.041010    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:41:28.041022    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:41:28.064971    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:41:28.064982    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:41:28.081708    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:41:28.081726    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:41:28.093610    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:41:28.093624    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:41:28.107395    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:41:28.107416    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:41:28.111967    4979 logs.go:123] Gathering logs for coredns [171279a29803] ...
	I0729 16:41:28.111974    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 171279a29803"
	I0729 16:41:28.129714    4979 logs.go:123] Gathering logs for coredns [af10021a5c6e] ...
	I0729 16:41:28.129725    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af10021a5c6e"
	I0729 16:41:28.141366    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:41:28.141375    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:41:28.157018    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:41:28.157028    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:41:28.174579    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:41:28.174590    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:41:28.189142    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:41:28.189153    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:41:28.201675    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:41:28.201688    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:41:28.347318    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:28.347350    5115 kubeadm.go:597] duration metric: took 4m3.854377s to restartPrimaryControlPlane
	W0729 16:41:28.347378    5115 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 16:41:28.347394    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 16:41:29.303577    5115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 16:41:29.309013    5115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 16:41:29.311954    5115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 16:41:29.314868    5115 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 16:41:29.314875    5115 kubeadm.go:157] found existing configuration files:
	
	I0729 16:41:29.314900    5115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/admin.conf
	I0729 16:41:29.317308    5115 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 16:41:29.317331    5115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 16:41:29.319710    5115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/kubelet.conf
	I0729 16:41:29.322684    5115 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 16:41:29.322707    5115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 16:41:29.325347    5115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/controller-manager.conf
	I0729 16:41:29.327830    5115 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 16:41:29.327853    5115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 16:41:29.330861    5115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/scheduler.conf
	I0729 16:41:29.333223    5115 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 16:41:29.333242    5115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
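
The four grep-then-rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes survives only if it already references the expected control-plane endpoint; otherwise it is removed so the upcoming kubeadm init regenerates it. Condensed into one loop (a sketch — the real sequence runs each grep and rm as a separate SSH command; the endpoint is the one from this run):

    endpoint="https://control-plane.minikube.internal:50503"
    for name in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/${name}.conf"
      # Keep the file only when it points at the expected endpoint.
      sudo grep -q "$endpoint" "$conf" || sudo rm -f "$conf"
    done
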
	I0729 16:41:29.335914    5115 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 16:41:29.353412    5115 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 16:41:29.353448    5115 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 16:41:29.404968    5115 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 16:41:29.405031    5115 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 16:41:29.405084    5115 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 16:41:29.452487    5115 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 16:41:29.456679    5115 out.go:204]   - Generating certificates and keys ...
	I0729 16:41:29.456718    5115 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 16:41:29.456752    5115 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 16:41:29.456791    5115 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 16:41:29.456829    5115 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 16:41:29.456860    5115 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 16:41:29.456890    5115 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 16:41:29.456926    5115 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 16:41:29.456957    5115 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 16:41:29.456991    5115 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 16:41:29.457030    5115 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 16:41:29.457053    5115 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 16:41:29.457079    5115 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 16:41:29.755027    5115 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 16:41:29.821682    5115 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 16:41:29.924030    5115 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 16:41:30.083949    5115 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 16:41:30.111991    5115 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 16:41:30.112307    5115 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 16:41:30.112329    5115 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 16:41:30.180887    5115 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 16:41:30.184121    5115 out.go:204]   - Booting up control plane ...
	I0729 16:41:30.184168    5115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 16:41:30.184207    5115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 16:41:30.184249    5115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 16:41:30.184294    5115 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 16:41:30.184404    5115 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 16:41:30.714340    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:34.687258    5115 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502680 seconds
	I0729 16:41:34.687355    5115 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 16:41:34.691994    5115 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 16:41:35.222193    5115 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 16:41:35.222485    5115 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-170000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 16:41:35.726538    5115 kubeadm.go:310] [bootstrap-token] Using token: l2hdh8.ozx7sr07436dbjkf
	I0729 16:41:35.733014    5115 out.go:204]   - Configuring RBAC rules ...
	I0729 16:41:35.733074    5115 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 16:41:35.733114    5115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 16:41:35.740337    5115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 16:41:35.741920    5115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 16:41:35.742826    5115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 16:41:35.744092    5115 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 16:41:35.748960    5115 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 16:41:35.912058    5115 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 16:41:36.133437    5115 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 16:41:36.134160    5115 kubeadm.go:310] 
	I0729 16:41:36.134196    5115 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 16:41:36.134199    5115 kubeadm.go:310] 
	I0729 16:41:36.134250    5115 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 16:41:36.134255    5115 kubeadm.go:310] 
	I0729 16:41:36.134272    5115 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 16:41:36.134316    5115 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 16:41:36.134348    5115 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 16:41:36.134353    5115 kubeadm.go:310] 
	I0729 16:41:36.134382    5115 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 16:41:36.134386    5115 kubeadm.go:310] 
	I0729 16:41:36.134409    5115 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 16:41:36.134413    5115 kubeadm.go:310] 
	I0729 16:41:36.134441    5115 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 16:41:36.134481    5115 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 16:41:36.134524    5115 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 16:41:36.134528    5115 kubeadm.go:310] 
	I0729 16:41:36.134584    5115 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 16:41:36.134619    5115 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 16:41:36.134623    5115 kubeadm.go:310] 
	I0729 16:41:36.134675    5115 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token l2hdh8.ozx7sr07436dbjkf \
	I0729 16:41:36.134735    5115 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b9cecc1c3dd985258772234c33c785f9bcad6eff884cc7ff19b79a518c1cf4e1 \
	I0729 16:41:36.134747    5115 kubeadm.go:310] 	--control-plane 
	I0729 16:41:36.134751    5115 kubeadm.go:310] 
	I0729 16:41:36.134796    5115 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 16:41:36.134802    5115 kubeadm.go:310] 
	I0729 16:41:36.134844    5115 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token l2hdh8.ozx7sr07436dbjkf \
	I0729 16:41:36.134894    5115 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b9cecc1c3dd985258772234c33c785f9bcad6eff884cc7ff19b79a518c1cf4e1 
	I0729 16:41:36.135138    5115 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 16:41:36.135147    5115 cni.go:84] Creating CNI manager for ""
	I0729 16:41:36.135156    5115 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:41:36.138045    5115 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 16:41:36.144996    5115 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 16:41:36.147997    5115 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
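
The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is minikube's default bridge CNI configuration. Its shape is approximately the following (an illustration of the format, not the byte-exact file from this run):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }
    EOF
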
	I0729 16:41:36.152749    5115 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 16:41:36.152793    5115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:41:36.152831    5115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-170000 minikube.k8s.io/updated_at=2024_07_29T16_41_36_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a9ecc7e4bd8b0211d6b42552bd8a0113828840b9 minikube.k8s.io/name=stopped-upgrade-170000 minikube.k8s.io/primary=true
	I0729 16:41:36.194923    5115 ops.go:34] apiserver oom_adj: -16
	I0729 16:41:36.194959    5115 kubeadm.go:1113] duration metric: took 42.205958ms to wait for elevateKubeSystemPrivileges
	I0729 16:41:36.195011    5115 kubeadm.go:394] duration metric: took 4m11.715501833s to StartCluster
	I0729 16:41:36.195024    5115 settings.go:142] acquiring lock: {Name:mk1df9c174f764d47de5a2c25ea0f0fc28c1d98c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:41:36.195117    5115 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:41:36.195560    5115 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/kubeconfig: {Name:mkadb977bd50641dea3f6c522a66ad62f461af12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:41:36.195741    5115 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:41:36.195780    5115 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 16:41:36.195816    5115 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-170000"
	I0729 16:41:36.195829    5115 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-170000"
	W0729 16:41:36.195832    5115 addons.go:243] addon storage-provisioner should already be in state true
	I0729 16:41:36.195844    5115 host.go:66] Checking if "stopped-upgrade-170000" exists ...
	I0729 16:41:36.195845    5115 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:41:36.195845    5115 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-170000"
	I0729 16:41:36.195872    5115 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-170000"
	I0729 16:41:36.196807    5115 kapi.go:59] client config for stopped-upgrade-170000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/client.key", CAFile:"/Users/jenkins/minikube-integration/19348-1218/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102460080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 16:41:36.196935    5115 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-170000"
	W0729 16:41:36.196939    5115 addons.go:243] addon default-storageclass should already be in state true
	I0729 16:41:36.196947    5115 host.go:66] Checking if "stopped-upgrade-170000" exists ...
	I0729 16:41:36.200017    5115 out.go:177] * Verifying Kubernetes components...
	I0729 16:41:36.200320    5115 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 16:41:36.204193    5115 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 16:41:36.204200    5115 sshutil.go:53] new ssh client: &{IP:localhost Port:50469 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/stopped-upgrade-170000/id_rsa Username:docker}
	I0729 16:41:36.207956    5115 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:41:35.716522    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:35.716663    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:41:35.730712    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:41:35.730785    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:41:35.742428    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:41:35.742498    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:41:35.761770    4979 logs.go:276] 4 containers: [171279a29803 af10021a5c6e 148e5d9f19b4 1d8ed2209c51]
	I0729 16:41:35.761843    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:41:35.772685    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:41:35.772752    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:41:35.784169    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:41:35.784238    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:41:35.800266    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:41:35.800338    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:41:35.810890    4979 logs.go:276] 0 containers: []
	W0729 16:41:35.810903    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:41:35.810964    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:41:35.821822    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:41:35.821841    4979 logs.go:123] Gathering logs for coredns [af10021a5c6e] ...
	I0729 16:41:35.821846    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af10021a5c6e"
	I0729 16:41:35.836927    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:41:35.836940    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:41:35.850320    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:41:35.850334    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:41:35.865020    4979 logs.go:123] Gathering logs for coredns [171279a29803] ...
	I0729 16:41:35.865031    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 171279a29803"
	I0729 16:41:35.877288    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:41:35.877298    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:41:35.892774    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:41:35.892786    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:41:35.932673    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:41:35.932687    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:41:35.947762    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:41:35.947774    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:41:35.975584    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:41:35.975601    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:41:35.988368    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:41:35.988381    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:41:36.000601    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:41:36.000615    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:41:36.014771    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:41:36.014788    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:41:36.035059    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:41:36.035073    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:41:36.047261    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:41:36.047275    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:41:36.083044    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:41:36.083058    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:41:38.589053    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:36.212014    5115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:41:36.214948    5115 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 16:41:36.214954    5115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 16:41:36.214959    5115 sshutil.go:53] new ssh client: &{IP:localhost Port:50469 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/stopped-upgrade-170000/id_rsa Username:docker}
	I0729 16:41:36.285041    5115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 16:41:36.292110    5115 api_server.go:52] waiting for apiserver process to appear ...
	I0729 16:41:36.292165    5115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:41:36.292876    5115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 16:41:36.296713    5115 api_server.go:72] duration metric: took 100.963708ms to wait for apiserver process to appear ...
	I0729 16:41:36.296722    5115 api_server.go:88] waiting for apiserver healthz status ...
	I0729 16:41:36.296729    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:36.330057    5115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 16:41:43.591093    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:43.591234    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:41:43.603793    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:41:43.603869    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:41:43.614252    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:41:43.614324    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:41:43.625316    4979 logs.go:276] 4 containers: [171279a29803 af10021a5c6e 148e5d9f19b4 1d8ed2209c51]
	I0729 16:41:43.625397    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:41:43.635935    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:41:43.636006    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:41:43.646830    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:41:43.646898    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:41:43.657144    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:41:43.657213    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:41:43.667865    4979 logs.go:276] 0 containers: []
	W0729 16:41:43.667883    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:41:43.667945    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:41:43.678433    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:41:43.678450    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:41:43.678455    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:41:43.689794    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:41:43.689806    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:41:43.700890    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:41:43.700900    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:41:43.733910    4979 logs.go:123] Gathering logs for coredns [171279a29803] ...
	I0729 16:41:43.733918    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 171279a29803"
	I0729 16:41:43.745596    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:41:43.745607    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:41:43.749887    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:41:43.749896    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:41:43.761244    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:41:43.761254    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:41:43.772962    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:41:43.772972    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:41:43.790832    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:41:43.790843    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:41:43.802526    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:41:43.802536    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:41:43.841436    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:41:43.841449    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:41:43.862063    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:41:43.862074    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:41:43.877807    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:41:43.877816    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:41:43.901447    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:41:43.901455    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:41:43.915656    4979 logs.go:123] Gathering logs for coredns [af10021a5c6e] ...
	I0729 16:41:43.915664    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af10021a5c6e"
	I0729 16:41:41.298722    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:41.298756    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:46.430003    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:46.298949    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:46.298987    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:51.430413    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:51.430501    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:41:51.445767    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:41:51.445848    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:41:51.458472    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:41:51.458544    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:41:51.469222    4979 logs.go:276] 4 containers: [171279a29803 af10021a5c6e 148e5d9f19b4 1d8ed2209c51]
	I0729 16:41:51.469294    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:41:51.479542    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:41:51.479615    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:41:51.493580    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:41:51.493655    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:41:51.507168    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:41:51.507233    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:41:51.516663    4979 logs.go:276] 0 containers: []
	W0729 16:41:51.516676    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:41:51.516737    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:41:51.527547    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:41:51.527566    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:41:51.527571    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:41:51.542868    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:41:51.542878    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:41:51.556734    4979 logs.go:123] Gathering logs for coredns [af10021a5c6e] ...
	I0729 16:41:51.556744    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af10021a5c6e"
	I0729 16:41:51.568853    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:41:51.568865    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:41:51.580092    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:41:51.580102    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:41:51.604131    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:41:51.604148    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:41:51.616399    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:41:51.616412    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:41:51.649698    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:41:51.649709    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:41:51.663669    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:41:51.663681    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:41:51.684300    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:41:51.684310    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:41:51.719635    4979 logs.go:123] Gathering logs for coredns [171279a29803] ...
	I0729 16:41:51.719645    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 171279a29803"
	I0729 16:41:51.731920    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:41:51.731930    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:41:51.743856    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:41:51.743868    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:41:51.755616    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:41:51.755626    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:41:51.768228    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:41:51.768238    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:41:51.299331    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:51.299352    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:54.274987    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:56.299674    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:56.299699    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:59.277169    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:59.277373    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:41:59.297363    4979 logs.go:276] 1 containers: [59c38f954feb]
	I0729 16:41:59.297459    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:41:59.311812    4979 logs.go:276] 1 containers: [fb7684c3d1e5]
	I0729 16:41:59.311885    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:41:59.323532    4979 logs.go:276] 4 containers: [171279a29803 af10021a5c6e 148e5d9f19b4 1d8ed2209c51]
	I0729 16:41:59.323606    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:41:59.334285    4979 logs.go:276] 1 containers: [cc0c9e0f620b]
	I0729 16:41:59.334355    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:41:59.345409    4979 logs.go:276] 1 containers: [3a31dd736962]
	I0729 16:41:59.345474    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:41:59.356225    4979 logs.go:276] 1 containers: [5baa0fe475ee]
	I0729 16:41:59.356293    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:41:59.367082    4979 logs.go:276] 0 containers: []
	W0729 16:41:59.367093    4979 logs.go:278] No container was found matching "kindnet"
	I0729 16:41:59.367148    4979 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:41:59.377698    4979 logs.go:276] 1 containers: [20700082a4de]
	I0729 16:41:59.377715    4979 logs.go:123] Gathering logs for coredns [148e5d9f19b4] ...
	I0729 16:41:59.377724    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148e5d9f19b4"
	I0729 16:41:59.389171    4979 logs.go:123] Gathering logs for kube-scheduler [cc0c9e0f620b] ...
	I0729 16:41:59.389185    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0c9e0f620b"
	I0729 16:41:59.404630    4979 logs.go:123] Gathering logs for kube-proxy [3a31dd736962] ...
	I0729 16:41:59.404641    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a31dd736962"
	I0729 16:41:59.417654    4979 logs.go:123] Gathering logs for Docker ...
	I0729 16:41:59.417665    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:41:59.441888    4979 logs.go:123] Gathering logs for container status ...
	I0729 16:41:59.441898    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:41:59.456523    4979 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:41:59.456536    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:41:59.492028    4979 logs.go:123] Gathering logs for etcd [fb7684c3d1e5] ...
	I0729 16:41:59.492040    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb7684c3d1e5"
	I0729 16:41:59.506136    4979 logs.go:123] Gathering logs for kube-controller-manager [5baa0fe475ee] ...
	I0729 16:41:59.506149    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5baa0fe475ee"
	I0729 16:41:59.529000    4979 logs.go:123] Gathering logs for dmesg ...
	I0729 16:41:59.529010    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:41:59.533260    4979 logs.go:123] Gathering logs for kube-apiserver [59c38f954feb] ...
	I0729 16:41:59.533265    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59c38f954feb"
	I0729 16:41:59.547744    4979 logs.go:123] Gathering logs for coredns [af10021a5c6e] ...
	I0729 16:41:59.547755    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af10021a5c6e"
	I0729 16:41:59.559693    4979 logs.go:123] Gathering logs for kubelet ...
	I0729 16:41:59.559706    4979 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:41:59.593586    4979 logs.go:123] Gathering logs for coredns [171279a29803] ...
	I0729 16:41:59.593596    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 171279a29803"
	I0729 16:41:59.605046    4979 logs.go:123] Gathering logs for coredns [1d8ed2209c51] ...
	I0729 16:41:59.605057    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d8ed2209c51"
	I0729 16:41:59.616985    4979 logs.go:123] Gathering logs for storage-provisioner [20700082a4de] ...
	I0729 16:41:59.616996    4979 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20700082a4de"
	I0729 16:42:02.130171    4979 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:42:01.299989    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:42:01.300014    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:42:06.300576    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:42:06.300604    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 16:42:06.701309    5115 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 16:42:06.705600    5115 out.go:177] * Enabled addons: storage-provisioner
	I0729 16:42:07.132252    4979 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:42:07.135872    4979 out.go:177] 
	W0729 16:42:07.139710    4979 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0729 16:42:07.139717    4979 out.go:239] * 
	W0729 16:42:07.140428    4979 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:42:07.151680    4979 out.go:177] 
	I0729 16:42:06.712459    5115 addons.go:510] duration metric: took 30.517603666s for enable addons: enabled=[storage-provisioner]
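
The "Enabling 'default-storageclass' returned an error" warning above comes from minikube's post-start addon callback, which lists StorageClasses and marks the bundled one as the cluster default. A minimal hand-run equivalent is sketched below (not part of the captured log; it assumes the profile's kubeconfig is active and that the bundled class is named "standard", as in stock minikube):

    # List storage classes, then annotate "standard" as the default class.
    kubectl get storageclass
    kubectl patch storageclass standard -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Both calls would fail here for the same reason as the healthz probes: nothing is answering on 10.0.2.15:8443 within the client timeout.
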
	I0729 16:42:11.301374    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:42:11.301418    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:42:16.302578    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:42:16.302599    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
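
Each "Checking apiserver healthz" line above is an HTTPS GET against the guest-internal address with a short client timeout. To reproduce the probe by hand, a sketch (assuming the VM is still running; 10.0.2.15 is only routable from inside the QEMU guest, so the request has to be issued over minikube ssh):

    # Probe the apiserver health endpoint from inside the guest;
    # -k skips TLS verification, --max-time mirrors the short client timeout.
    minikube -p stopped-upgrade-170000 ssh -- \
      curl -sk --max-time 5 https://10.0.2.15:8443/healthz

A healthy apiserver answers "ok"; the repeated "context deadline exceeded" entries mean the probe never completed at all.
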
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-07-29 23:33:16 UTC, ends at Mon 2024-07-29 23:42:23 UTC. --
	Jul 29 23:42:07 running-upgrade-896000 dockerd[3195]: time="2024-07-29T23:42:07.769387302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 23:42:07 running-upgrade-896000 dockerd[3195]: time="2024-07-29T23:42:07.769486129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 23:42:07 running-upgrade-896000 dockerd[3195]: time="2024-07-29T23:42:07.769509878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 23:42:07 running-upgrade-896000 dockerd[3195]: time="2024-07-29T23:42:07.769572415Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8d2e7bf4114923e508d398ae80773579453293bbbc1ae5aeacf9335252173151 pid=18851 runtime=io.containerd.runc.v2
	Jul 29 23:42:08 running-upgrade-896000 cri-dockerd[3038]: time="2024-07-29T23:42:08Z" level=error msg="ContainerStats resp: {0x40005ab140 linux}"
	Jul 29 23:42:09 running-upgrade-896000 cri-dockerd[3038]: time="2024-07-29T23:42:09Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 23:42:09 running-upgrade-896000 cri-dockerd[3038]: time="2024-07-29T23:42:09Z" level=error msg="ContainerStats resp: {0x400075b980 linux}"
	Jul 29 23:42:09 running-upgrade-896000 cri-dockerd[3038]: time="2024-07-29T23:42:09Z" level=error msg="ContainerStats resp: {0x400075bdc0 linux}"
	Jul 29 23:42:09 running-upgrade-896000 cri-dockerd[3038]: time="2024-07-29T23:42:09Z" level=error msg="ContainerStats resp: {0x40007b1940 linux}"
	Jul 29 23:42:09 running-upgrade-896000 cri-dockerd[3038]: time="2024-07-29T23:42:09Z" level=error msg="ContainerStats resp: {0x400075bfc0 linux}"
	Jul 29 23:42:09 running-upgrade-896000 cri-dockerd[3038]: time="2024-07-29T23:42:09Z" level=error msg="ContainerStats resp: {0x40009d9040 linux}"
	Jul 29 23:42:09 running-upgrade-896000 cri-dockerd[3038]: time="2024-07-29T23:42:09Z" level=error msg="ContainerStats resp: {0x4000a08600 linux}"
	Jul 29 23:42:09 running-upgrade-896000 cri-dockerd[3038]: time="2024-07-29T23:42:09Z" level=error msg="ContainerStats resp: {0x40009d93c0 linux}"
	Jul 29 23:42:14 running-upgrade-896000 cri-dockerd[3038]: time="2024-07-29T23:42:14Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 23:42:19 running-upgrade-896000 cri-dockerd[3038]: time="2024-07-29T23:42:19Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 23:42:19 running-upgrade-896000 cri-dockerd[3038]: time="2024-07-29T23:42:19Z" level=error msg="ContainerStats resp: {0x40005ab380 linux}"
	Jul 29 23:42:19 running-upgrade-896000 cri-dockerd[3038]: time="2024-07-29T23:42:19Z" level=error msg="ContainerStats resp: {0x400075bb40 linux}"
	Jul 29 23:42:20 running-upgrade-896000 cri-dockerd[3038]: time="2024-07-29T23:42:20Z" level=error msg="ContainerStats resp: {0x40005aad00 linux}"
	Jul 29 23:42:21 running-upgrade-896000 cri-dockerd[3038]: time="2024-07-29T23:42:21Z" level=error msg="ContainerStats resp: {0x40005abdc0 linux}"
	Jul 29 23:42:21 running-upgrade-896000 cri-dockerd[3038]: time="2024-07-29T23:42:21Z" level=error msg="ContainerStats resp: {0x40008e8d80 linux}"
	Jul 29 23:42:21 running-upgrade-896000 cri-dockerd[3038]: time="2024-07-29T23:42:21Z" level=error msg="ContainerStats resp: {0x40008e9080 linux}"
	Jul 29 23:42:21 running-upgrade-896000 cri-dockerd[3038]: time="2024-07-29T23:42:21Z" level=error msg="ContainerStats resp: {0x40007b0a80 linux}"
	Jul 29 23:42:21 running-upgrade-896000 cri-dockerd[3038]: time="2024-07-29T23:42:21Z" level=error msg="ContainerStats resp: {0x40008e9a40 linux}"
	Jul 29 23:42:21 running-upgrade-896000 cri-dockerd[3038]: time="2024-07-29T23:42:21Z" level=error msg="ContainerStats resp: {0x40007b1380 linux}"
	Jul 29 23:42:21 running-upgrade-896000 cri-dockerd[3038]: time="2024-07-29T23:42:21Z" level=error msg="ContainerStats resp: {0x40001a0540 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	8d2e7bf411492       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   053aa458a1b42
	6dc173d893114       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   66a3d67314c23
	171279a29803b       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   66a3d67314c23
	af10021a5c6ed       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   053aa458a1b42
	3a31dd736962b       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   fcc4712d56ddc
	20700082a4dea       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   ede96421c26a5
	cc0c9e0f620b6       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   a6c104499fe66
	fb7684c3d1e53       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   76ee62b9a2da7
	59c38f954feb6       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   f99701c317319
	5baa0fe475ee1       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   ec0c7c168703b
	
	
	==> coredns [171279a29803] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8631690946398109359.6617547210405128599. HINFO: read udp 10.244.0.2:37993->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8631690946398109359.6617547210405128599. HINFO: read udp 10.244.0.2:37739->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8631690946398109359.6617547210405128599. HINFO: read udp 10.244.0.2:43867->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8631690946398109359.6617547210405128599. HINFO: read udp 10.244.0.2:53614->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8631690946398109359.6617547210405128599. HINFO: read udp 10.244.0.2:47808->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8631690946398109359.6617547210405128599. HINFO: read udp 10.244.0.2:51534->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8631690946398109359.6617547210405128599. HINFO: read udp 10.244.0.2:51021->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8631690946398109359.6617547210405128599. HINFO: read udp 10.244.0.2:44003->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8631690946398109359.6617547210405128599. HINFO: read udp 10.244.0.2:44889->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8631690946398109359.6617547210405128599. HINFO: read udp 10.244.0.2:36114->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6dc173d89311] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5086258231309591595.4842286582670658441. HINFO: read udp 10.244.0.2:33108->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5086258231309591595.4842286582670658441. HINFO: read udp 10.244.0.2:57994->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5086258231309591595.4842286582670658441. HINFO: read udp 10.244.0.2:56480->10.0.2.3:53: i/o timeout
	
	
	==> coredns [8d2e7bf41149] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6409195324224120792.2129688457711805549. HINFO: read udp 10.244.0.3:47458->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6409195324224120792.2129688457711805549. HINFO: read udp 10.244.0.3:38723->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6409195324224120792.2129688457711805549. HINFO: read udp 10.244.0.3:58555->10.0.2.3:53: i/o timeout
	
	
	==> coredns [af10021a5c6e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3160820669435362066.2037601408819052987. HINFO: read udp 10.244.0.3:42241->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3160820669435362066.2037601408819052987. HINFO: read udp 10.244.0.3:44283->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3160820669435362066.2037601408819052987. HINFO: read udp 10.244.0.3:35711->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3160820669435362066.2037601408819052987. HINFO: read udp 10.244.0.3:60122->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3160820669435362066.2037601408819052987. HINFO: read udp 10.244.0.3:36833->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3160820669435362066.2037601408819052987. HINFO: read udp 10.244.0.3:36274->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3160820669435362066.2037601408819052987. HINFO: read udp 10.244.0.3:49188->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3160820669435362066.2037601408819052987. HINFO: read udp 10.244.0.3:56438->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3160820669435362066.2037601408819052987. HINFO: read udp 10.244.0.3:36905->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
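
All four coredns instances above report the same failure shape: HINFO probes to the upstream resolver 10.0.2.3:53 (QEMU's built-in user-mode-networking DNS forwarder) time out. One way to check that upstream path independently of coredns, as a sketch (it assumes a busybox-style nslookup is available in the Buildroot guest image and that the VM is reachable over minikube ssh):

    # Query the slirp DNS forwarder directly from inside the guest.
    minikube -p running-upgrade-896000 ssh -- \
      nslookup kubernetes.io 10.0.2.3

If this also hangs, the problem is host-side QEMU networking rather than the coredns configuration itself.
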
	
	
	==> describe nodes <==
	Name:               running-upgrade-896000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-896000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9ecc7e4bd8b0211d6b42552bd8a0113828840b9
	                    minikube.k8s.io/name=running-upgrade-896000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T16_38_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 23:38:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-896000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 23:42:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 23:38:06 +0000   Mon, 29 Jul 2024 23:38:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 23:38:06 +0000   Mon, 29 Jul 2024 23:38:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 23:38:06 +0000   Mon, 29 Jul 2024 23:38:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 23:38:06 +0000   Mon, 29 Jul 2024 23:38:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-896000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 aed0a11d719f4f169fca1c9693a52219
	  System UUID:                aed0a11d719f4f169fca1c9693a52219
	  Boot ID:                    e78507ed-54d3-4fb9-a6f0-e73ebc342eb4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-dmngz                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-tqkdv                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-896000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-896000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-896000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-proxy-glhtj                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-896000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m2s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-896000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-896000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-896000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-896000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-896000 event: Registered Node running-upgrade-896000 in Controller
	
	
	==> dmesg <==
	[  +1.949362] systemd-fstab-generator[876]: Ignoring "noauto" for root device
	[  +0.068093] systemd-fstab-generator[887]: Ignoring "noauto" for root device
	[  +0.059385] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +1.140729] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.075848] systemd-fstab-generator[1048]: Ignoring "noauto" for root device
	[  +0.060818] systemd-fstab-generator[1059]: Ignoring "noauto" for root device
	[  +2.248613] systemd-fstab-generator[1286]: Ignoring "noauto" for root device
	[  +9.125801] systemd-fstab-generator[1915]: Ignoring "noauto" for root device
	[  +2.754119] systemd-fstab-generator[2193]: Ignoring "noauto" for root device
	[  +0.126641] systemd-fstab-generator[2226]: Ignoring "noauto" for root device
	[  +0.082758] systemd-fstab-generator[2237]: Ignoring "noauto" for root device
	[  +0.082289] systemd-fstab-generator[2250]: Ignoring "noauto" for root device
	[  +2.460371] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.216417] systemd-fstab-generator[2993]: Ignoring "noauto" for root device
	[  +0.066990] systemd-fstab-generator[3006]: Ignoring "noauto" for root device
	[  +0.062794] systemd-fstab-generator[3017]: Ignoring "noauto" for root device
	[  +0.070268] systemd-fstab-generator[3031]: Ignoring "noauto" for root device
	[  +2.239730] systemd-fstab-generator[3181]: Ignoring "noauto" for root device
	[  +3.207811] systemd-fstab-generator[3554]: Ignoring "noauto" for root device
	[  +1.474196] systemd-fstab-generator[3860]: Ignoring "noauto" for root device
	[Jul29 23:34] kauditd_printk_skb: 68 callbacks suppressed
	[Jul29 23:37] kauditd_printk_skb: 23 callbacks suppressed
	[Jul29 23:38] systemd-fstab-generator[11908]: Ignoring "noauto" for root device
	[  +5.624329] systemd-fstab-generator[12503]: Ignoring "noauto" for root device
	[  +0.465786] systemd-fstab-generator[12636]: Ignoring "noauto" for root device
	
	
	==> etcd [fb7684c3d1e5] <==
	{"level":"info","ts":"2024-07-29T23:38:01.644Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-07-29T23:38:01.644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-07-29T23:38:01.644Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-29T23:38:01.645Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-29T23:38:01.645Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-29T23:38:01.644Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T23:38:01.645Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T23:38:02.540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T23:38:02.540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T23:38:02.540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-29T23:38:02.540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T23:38:02.540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-29T23:38:02.540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T23:38:02.540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-29T23:38:02.540Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-896000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T23:38:02.541Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T23:38:02.541Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T23:38:02.541Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T23:38:02.541Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T23:38:02.541Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-07-29T23:38:02.541Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T23:38:02.541Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T23:38:02.541Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T23:38:02.541Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T23:38:02.542Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 23:42:23 up 9 min,  0 users,  load average: 0.52, 0.31, 0.17
	Linux running-upgrade-896000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [59c38f954feb] <==
	I0729 23:38:03.774265       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0729 23:38:03.774638       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 23:38:03.775173       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 23:38:03.775272       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0729 23:38:03.776255       1 cache.go:39] Caches are synced for autoregister controller
	I0729 23:38:03.792749       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0729 23:38:03.805481       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0729 23:38:04.501932       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0729 23:38:04.686513       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0729 23:38:04.693475       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0729 23:38:04.693513       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 23:38:04.856319       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 23:38:04.866364       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 23:38:04.944121       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0729 23:38:04.947628       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0729 23:38:04.948019       1 controller.go:611] quota admission added evaluator for: endpoints
	I0729 23:38:04.949288       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 23:38:05.844067       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0729 23:38:06.226563       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0729 23:38:06.230215       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0729 23:38:06.251922       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0729 23:38:06.288780       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 23:38:18.794094       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0729 23:38:19.693669       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0729 23:38:20.927789       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [5baa0fe475ee] <==
	I0729 23:38:18.701260       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0729 23:38:18.703736       1 range_allocator.go:374] Set node running-upgrade-896000 PodCIDR to [10.244.0.0/24]
	I0729 23:38:18.703796       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0729 23:38:18.742836       1 shared_informer.go:262] Caches are synced for attach detach
	I0729 23:38:18.753030       1 shared_informer.go:262] Caches are synced for taint
	I0729 23:38:18.753071       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0729 23:38:18.753095       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-896000. Assuming now as a timestamp.
	I0729 23:38:18.753126       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0729 23:38:18.753190       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0729 23:38:18.753264       1 event.go:294] "Event occurred" object="running-upgrade-896000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-896000 event: Registered Node running-upgrade-896000 in Controller"
	I0729 23:38:18.763414       1 shared_informer.go:262] Caches are synced for cronjob
	I0729 23:38:18.790165       1 shared_informer.go:262] Caches are synced for job
	I0729 23:38:18.797168       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0729 23:38:18.812511       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 23:38:18.818825       1 shared_informer.go:262] Caches are synced for stateful set
	I0729 23:38:18.841158       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0729 23:38:18.849089       1 shared_informer.go:262] Caches are synced for daemon sets
	I0729 23:38:18.870066       1 shared_informer.go:262] Caches are synced for persistent volume
	I0729 23:38:18.894353       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 23:38:19.316338       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 23:38:19.346117       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 23:38:19.346171       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0729 23:38:19.644502       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-dmngz"
	I0729 23:38:19.646936       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-tqkdv"
	I0729 23:38:19.696106       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-glhtj"
	
	
	==> kube-proxy [3a31dd736962] <==
	I0729 23:38:20.911219       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0729 23:38:20.911255       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0729 23:38:20.911280       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0729 23:38:20.925138       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0729 23:38:20.925146       1 server_others.go:206] "Using iptables Proxier"
	I0729 23:38:20.925161       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0729 23:38:20.925270       1 server.go:661] "Version info" version="v1.24.1"
	I0729 23:38:20.925273       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 23:38:20.925761       1 config.go:317] "Starting service config controller"
	I0729 23:38:20.925764       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0729 23:38:20.925773       1 config.go:226] "Starting endpoint slice config controller"
	I0729 23:38:20.925774       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0729 23:38:20.925936       1 config.go:444] "Starting node config controller"
	I0729 23:38:20.925938       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0729 23:38:21.029596       1 shared_informer.go:262] Caches are synced for service config
	I0729 23:38:21.029596       1 shared_informer.go:262] Caches are synced for node config
	I0729 23:38:21.029678       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [cc0c9e0f620b] <==
	W0729 23:38:03.734191       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 23:38:03.734198       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 23:38:03.734214       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 23:38:03.734218       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 23:38:03.734234       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 23:38:03.734238       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 23:38:03.734261       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 23:38:03.734279       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 23:38:04.582460       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 23:38:04.582522       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 23:38:04.606352       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 23:38:04.606512       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 23:38:04.647231       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 23:38:04.647362       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 23:38:04.737798       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 23:38:04.737825       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 23:38:04.762753       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 23:38:04.762838       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 23:38:04.777773       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 23:38:04.777785       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 23:38:04.784224       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 23:38:04.784287       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 23:38:04.817986       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 23:38:04.818068       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0729 23:38:06.520378       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-07-29 23:33:16 UTC, ends at Mon 2024-07-29 23:42:23 UTC. --
	Jul 29 23:38:18 running-upgrade-896000 kubelet[12509]: I0729 23:38:18.758435   12509 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 23:38:18 running-upgrade-896000 kubelet[12509]: I0729 23:38:18.775096   12509 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 23:38:18 running-upgrade-896000 kubelet[12509]: I0729 23:38:18.775475   12509 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 23:38:18 running-upgrade-896000 kubelet[12509]: I0729 23:38:18.876194   12509 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/280f1357-74f7-479c-8983-52667461c4a0-tmp\") pod \"storage-provisioner\" (UID: \"280f1357-74f7-479c-8983-52667461c4a0\") " pod="kube-system/storage-provisioner"
	Jul 29 23:38:18 running-upgrade-896000 kubelet[12509]: I0729 23:38:18.876215   12509 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf27t\" (UniqueName: \"kubernetes.io/projected/280f1357-74f7-479c-8983-52667461c4a0-kube-api-access-gf27t\") pod \"storage-provisioner\" (UID: \"280f1357-74f7-479c-8983-52667461c4a0\") " pod="kube-system/storage-provisioner"
	Jul 29 23:38:18 running-upgrade-896000 kubelet[12509]: E0729 23:38:18.980828   12509 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 29 23:38:18 running-upgrade-896000 kubelet[12509]: E0729 23:38:18.980849   12509 projected.go:192] Error preparing data for projected volume kube-api-access-gf27t for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 29 23:38:18 running-upgrade-896000 kubelet[12509]: E0729 23:38:18.980889   12509 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/280f1357-74f7-479c-8983-52667461c4a0-kube-api-access-gf27t podName:280f1357-74f7-479c-8983-52667461c4a0 nodeName:}" failed. No retries permitted until 2024-07-29 23:38:19.480874747 +0000 UTC m=+13.263327373 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gf27t" (UniqueName: "kubernetes.io/projected/280f1357-74f7-479c-8983-52667461c4a0-kube-api-access-gf27t") pod "storage-provisioner" (UID: "280f1357-74f7-479c-8983-52667461c4a0") : configmap "kube-root-ca.crt" not found
	Jul 29 23:38:19 running-upgrade-896000 kubelet[12509]: E0729 23:38:19.487841   12509 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 29 23:38:19 running-upgrade-896000 kubelet[12509]: E0729 23:38:19.487860   12509 projected.go:192] Error preparing data for projected volume kube-api-access-gf27t for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 29 23:38:19 running-upgrade-896000 kubelet[12509]: E0729 23:38:19.487889   12509 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/280f1357-74f7-479c-8983-52667461c4a0-kube-api-access-gf27t podName:280f1357-74f7-479c-8983-52667461c4a0 nodeName:}" failed. No retries permitted until 2024-07-29 23:38:20.487880139 +0000 UTC m=+14.270332766 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gf27t" (UniqueName: "kubernetes.io/projected/280f1357-74f7-479c-8983-52667461c4a0-kube-api-access-gf27t") pod "storage-provisioner" (UID: "280f1357-74f7-479c-8983-52667461c4a0") : configmap "kube-root-ca.crt" not found
	Jul 29 23:38:19 running-upgrade-896000 kubelet[12509]: I0729 23:38:19.649479   12509 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 23:38:19 running-upgrade-896000 kubelet[12509]: I0729 23:38:19.652770   12509 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 23:38:19 running-upgrade-896000 kubelet[12509]: I0729 23:38:19.698539   12509 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 23:38:19 running-upgrade-896000 kubelet[12509]: I0729 23:38:19.789845   12509 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a3ddbaf-512a-4915-835c-8111bf0bcbcb-config-volume\") pod \"coredns-6d4b75cb6d-dmngz\" (UID: \"4a3ddbaf-512a-4915-835c-8111bf0bcbcb\") " pod="kube-system/coredns-6d4b75cb6d-dmngz"
	Jul 29 23:38:19 running-upgrade-896000 kubelet[12509]: I0729 23:38:19.790093   12509 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vh2g\" (UniqueName: \"kubernetes.io/projected/4a3ddbaf-512a-4915-835c-8111bf0bcbcb-kube-api-access-2vh2g\") pod \"coredns-6d4b75cb6d-dmngz\" (UID: \"4a3ddbaf-512a-4915-835c-8111bf0bcbcb\") " pod="kube-system/coredns-6d4b75cb6d-dmngz"
	Jul 29 23:38:19 running-upgrade-896000 kubelet[12509]: I0729 23:38:19.790120   12509 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf6d5ff7-ea89-450f-82b7-780f9df91872-config-volume\") pod \"coredns-6d4b75cb6d-tqkdv\" (UID: \"cf6d5ff7-ea89-450f-82b7-780f9df91872\") " pod="kube-system/coredns-6d4b75cb6d-tqkdv"
	Jul 29 23:38:19 running-upgrade-896000 kubelet[12509]: I0729 23:38:19.790143   12509 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmxmf\" (UniqueName: \"kubernetes.io/projected/cf6d5ff7-ea89-450f-82b7-780f9df91872-kube-api-access-qmxmf\") pod \"coredns-6d4b75cb6d-tqkdv\" (UID: \"cf6d5ff7-ea89-450f-82b7-780f9df91872\") " pod="kube-system/coredns-6d4b75cb6d-tqkdv"
	Jul 29 23:38:19 running-upgrade-896000 kubelet[12509]: I0729 23:38:19.890246   12509 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xjpm\" (UniqueName: \"kubernetes.io/projected/b4d8da07-5575-4ee4-9dfe-2eba4ab68180-kube-api-access-7xjpm\") pod \"kube-proxy-glhtj\" (UID: \"b4d8da07-5575-4ee4-9dfe-2eba4ab68180\") " pod="kube-system/kube-proxy-glhtj"
	Jul 29 23:38:19 running-upgrade-896000 kubelet[12509]: I0729 23:38:19.890279   12509 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4d8da07-5575-4ee4-9dfe-2eba4ab68180-lib-modules\") pod \"kube-proxy-glhtj\" (UID: \"b4d8da07-5575-4ee4-9dfe-2eba4ab68180\") " pod="kube-system/kube-proxy-glhtj"
	Jul 29 23:38:19 running-upgrade-896000 kubelet[12509]: I0729 23:38:19.890335   12509 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b4d8da07-5575-4ee4-9dfe-2eba4ab68180-kube-proxy\") pod \"kube-proxy-glhtj\" (UID: \"b4d8da07-5575-4ee4-9dfe-2eba4ab68180\") " pod="kube-system/kube-proxy-glhtj"
	Jul 29 23:38:19 running-upgrade-896000 kubelet[12509]: I0729 23:38:19.890362   12509 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4d8da07-5575-4ee4-9dfe-2eba4ab68180-xtables-lock\") pod \"kube-proxy-glhtj\" (UID: \"b4d8da07-5575-4ee4-9dfe-2eba4ab68180\") " pod="kube-system/kube-proxy-glhtj"
	Jul 29 23:38:20 running-upgrade-896000 kubelet[12509]: I0729 23:38:20.420695   12509 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="66a3d67314c23e9f140ebe8548badb482cc5999a890a7b10ba7e87469a42996c"
	Jul 29 23:42:08 running-upgrade-896000 kubelet[12509]: I0729 23:42:08.632139   12509 scope.go:110] "RemoveContainer" containerID="1d8ed2209c513afbec411483c6ceff3b43befd1034cb4b5e1f13a237ed56800f"
	Jul 29 23:42:08 running-upgrade-896000 kubelet[12509]: I0729 23:42:08.650234   12509 scope.go:110] "RemoveContainer" containerID="148e5d9f19b4b883eef3eb994f0806857acf8bbf93e0ab7a4bc36ef19ee15227"
	
	
	==> storage-provisioner [20700082a4de] <==
	I0729 23:38:20.858023       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 23:38:20.877552       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 23:38:20.877658       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 23:38:20.884762       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 23:38:20.885560       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fd4947c9-839f-4f4c-880e-5ca2cc234835", APIVersion:"v1", ResourceVersion:"368", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-896000_a4d2555e-bfab-4316-8a60-0037a7b41474 became leader
	I0729 23:38:20.885579       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-896000_a4d2555e-bfab-4316-8a60-0037a7b41474!
	I0729 23:38:20.986590       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-896000_a4d2555e-bfab-4316-8a60-0037a7b41474!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-896000 -n running-upgrade-896000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-896000 -n running-upgrade-896000: exit status 2 (15.625195416s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-896000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-896000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-896000
--- FAIL: TestRunningBinaryUpgrade (604.61s)
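
The status output above shows the upgraded cluster's apiserver reporting "Stopped". A minimal Go sketch (illustrative only, not part of the test suite) of the kind of apiserver health probe this status check boils down to; the address is an assumption built from the node IP (10.0.2.15) and APIServerPort (8443) that appear in these logs:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Ad-hoc probe only: the apiserver serves a self-signed certificate,
		// so certificate verification is skipped here.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // consistent with state="Stopped" above
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}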

TestKubernetesUpgrade (17.31s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-507000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-507000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.825784375s)

-- stdout --
	* [kubernetes-upgrade-507000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-507000" primary control-plane node in "kubernetes-upgrade-507000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-507000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:35:37.641817    5049 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:35:37.641960    5049 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:35:37.641966    5049 out.go:304] Setting ErrFile to fd 2...
	I0729 16:35:37.641968    5049 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:35:37.642088    5049 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:35:37.643173    5049 out.go:298] Setting JSON to false
	I0729 16:35:37.659928    5049 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3904,"bootTime":1722292233,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:35:37.660003    5049 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:35:37.666614    5049 out.go:177] * [kubernetes-upgrade-507000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:35:37.674663    5049 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:35:37.674694    5049 notify.go:220] Checking for updates...
	I0729 16:35:37.681634    5049 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:35:37.684673    5049 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:35:37.687563    5049 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:35:37.690574    5049 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:35:37.693527    5049 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:35:37.696989    5049 config.go:182] Loaded profile config "multinode-971000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:35:37.697061    5049 config.go:182] Loaded profile config "running-upgrade-896000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:35:37.697111    5049 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:35:37.701586    5049 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:35:37.708620    5049 start.go:297] selected driver: qemu2
	I0729 16:35:37.708629    5049 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:35:37.708635    5049 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:35:37.710908    5049 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:35:37.714608    5049 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:35:37.717672    5049 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 16:35:37.717707    5049 cni.go:84] Creating CNI manager for ""
	I0729 16:35:37.717717    5049 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 16:35:37.717743    5049 start.go:340] cluster config:
	{Name:kubernetes-upgrade-507000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-507000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:35:37.721404    5049 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:35:37.728393    5049 out.go:177] * Starting "kubernetes-upgrade-507000" primary control-plane node in "kubernetes-upgrade-507000" cluster
	I0729 16:35:37.732577    5049 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 16:35:37.732594    5049 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 16:35:37.732606    5049 cache.go:56] Caching tarball of preloaded images
	I0729 16:35:37.732666    5049 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:35:37.732672    5049 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 16:35:37.732742    5049 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/kubernetes-upgrade-507000/config.json ...
	I0729 16:35:37.732753    5049 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/kubernetes-upgrade-507000/config.json: {Name:mkc5dc12c3ff8eb341fdf2544aefe6f352a38c87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:35:37.733075    5049 start.go:360] acquireMachinesLock for kubernetes-upgrade-507000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:35:37.733106    5049 start.go:364] duration metric: took 24.5µs to acquireMachinesLock for "kubernetes-upgrade-507000"
	I0729 16:35:37.733116    5049 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-507000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-507000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:35:37.733143    5049 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:35:37.741560    5049 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:35:37.757336    5049 start.go:159] libmachine.API.Create for "kubernetes-upgrade-507000" (driver="qemu2")
	I0729 16:35:37.757355    5049 client.go:168] LocalClient.Create starting
	I0729 16:35:37.757433    5049 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:35:37.757464    5049 main.go:141] libmachine: Decoding PEM data...
	I0729 16:35:37.757472    5049 main.go:141] libmachine: Parsing certificate...
	I0729 16:35:37.757507    5049 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:35:37.757529    5049 main.go:141] libmachine: Decoding PEM data...
	I0729 16:35:37.757535    5049 main.go:141] libmachine: Parsing certificate...
	I0729 16:35:37.757995    5049 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:35:37.906417    5049 main.go:141] libmachine: Creating SSH key...
	I0729 16:35:37.961108    5049 main.go:141] libmachine: Creating Disk image...
	I0729 16:35:37.961114    5049 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:35:37.961310    5049 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2
	I0729 16:35:37.970673    5049 main.go:141] libmachine: STDOUT: 
	I0729 16:35:37.970694    5049 main.go:141] libmachine: STDERR: 
	I0729 16:35:37.970753    5049 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2 +20000M
	I0729 16:35:37.978630    5049 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:35:37.978644    5049 main.go:141] libmachine: STDERR: 
	I0729 16:35:37.978659    5049 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2
	I0729 16:35:37.978664    5049 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:35:37.978681    5049 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:35:37.978709    5049 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubernetes-upgrade-507000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubernetes-upgrade-507000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubernetes-upgrade-507000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:9f:b8:e2:27:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2
	I0729 16:35:37.980228    5049 main.go:141] libmachine: STDOUT: 
	I0729 16:35:37.980244    5049 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:35:37.980264    5049 client.go:171] duration metric: took 222.912167ms to LocalClient.Create
	I0729 16:35:39.982296    5049 start.go:128] duration metric: took 2.249209209s to createHost
	I0729 16:35:39.982326    5049 start.go:83] releasing machines lock for "kubernetes-upgrade-507000", held for 2.249282792s
	W0729 16:35:39.982371    5049 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:35:39.991361    5049 out.go:177] * Deleting "kubernetes-upgrade-507000" in qemu2 ...
	W0729 16:35:40.008072    5049 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:35:40.008082    5049 start.go:729] Will try again in 5 seconds ...
	I0729 16:35:45.010175    5049 start.go:360] acquireMachinesLock for kubernetes-upgrade-507000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:35:45.010482    5049 start.go:364] duration metric: took 235.917µs to acquireMachinesLock for "kubernetes-upgrade-507000"
	I0729 16:35:45.010594    5049 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-507000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-507000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:35:45.010740    5049 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:35:45.019012    5049 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:35:45.049600    5049 start.go:159] libmachine.API.Create for "kubernetes-upgrade-507000" (driver="qemu2")
	I0729 16:35:45.049649    5049 client.go:168] LocalClient.Create starting
	I0729 16:35:45.049751    5049 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:35:45.049811    5049 main.go:141] libmachine: Decoding PEM data...
	I0729 16:35:45.049824    5049 main.go:141] libmachine: Parsing certificate...
	I0729 16:35:45.049886    5049 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:35:45.049921    5049 main.go:141] libmachine: Decoding PEM data...
	I0729 16:35:45.049935    5049 main.go:141] libmachine: Parsing certificate...
	I0729 16:35:45.050358    5049 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:35:45.204002    5049 main.go:141] libmachine: Creating SSH key...
	I0729 16:35:45.373674    5049 main.go:141] libmachine: Creating Disk image...
	I0729 16:35:45.373692    5049 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:35:45.373894    5049 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2
	I0729 16:35:45.383582    5049 main.go:141] libmachine: STDOUT: 
	I0729 16:35:45.383603    5049 main.go:141] libmachine: STDERR: 
	I0729 16:35:45.383680    5049 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2 +20000M
	I0729 16:35:45.391646    5049 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:35:45.391660    5049 main.go:141] libmachine: STDERR: 
	I0729 16:35:45.391674    5049 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2
	I0729 16:35:45.391679    5049 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:35:45.391689    5049 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:35:45.391731    5049 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubernetes-upgrade-507000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubernetes-upgrade-507000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubernetes-upgrade-507000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:92:df:a8:7a:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2
	I0729 16:35:45.393321    5049 main.go:141] libmachine: STDOUT: 
	I0729 16:35:45.393334    5049 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:35:45.393351    5049 client.go:171] duration metric: took 343.7085ms to LocalClient.Create
	I0729 16:35:47.395505    5049 start.go:128] duration metric: took 2.384799709s to createHost
	I0729 16:35:47.395584    5049 start.go:83] releasing machines lock for "kubernetes-upgrade-507000", held for 2.385133625s
	W0729 16:35:47.395969    5049 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-507000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-507000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:35:47.405545    5049 out.go:177] 
	W0729 16:35:47.413736    5049 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:35:47.413791    5049 out.go:239] * 
	* 
	W0729 16:35:47.416307    5049 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:35:47.429537    5049 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-507000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
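
Every start attempt above failed at the same step: the qemu2 driver could not reach the host-side socket_vmnet daemon. A minimal Go sketch (hypothetical, not part of the test suite) that reproduces the failing precondition by dialing the same unix socket:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// socket_vmnet is the host-side networking daemon minikube's qemu2
		// driver depends on; a "connection refused" here matches the
		// ERROR: Failed to connect to "/var/run/socket_vmnet" lines in the log.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}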
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-507000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-507000: (2.084076209s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-507000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-507000 status --format={{.Host}}: exit status 7 (57.227958ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-507000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-507000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.184378958s)

-- stdout --
	* [kubernetes-upgrade-507000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-507000" primary control-plane node in "kubernetes-upgrade-507000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-507000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-507000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:35:49.614554    5080 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:35:49.614698    5080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:35:49.614704    5080 out.go:304] Setting ErrFile to fd 2...
	I0729 16:35:49.614707    5080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:35:49.614849    5080 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:35:49.615873    5080 out.go:298] Setting JSON to false
	I0729 16:35:49.632174    5080 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3916,"bootTime":1722292233,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:35:49.632250    5080 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:35:49.637803    5080 out.go:177] * [kubernetes-upgrade-507000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:35:49.645664    5080 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:35:49.645708    5080 notify.go:220] Checking for updates...
	I0729 16:35:49.651597    5080 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:35:49.654677    5080 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:35:49.657624    5080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:35:49.660655    5080 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:35:49.663673    5080 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:35:49.665195    5080 config.go:182] Loaded profile config "kubernetes-upgrade-507000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 16:35:49.665468    5080 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:35:49.669588    5080 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:35:49.676514    5080 start.go:297] selected driver: qemu2
	I0729 16:35:49.676525    5080 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-507000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-507000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:35:49.676589    5080 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:35:49.678977    5080 cni.go:84] Creating CNI manager for ""
	I0729 16:35:49.678993    5080 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:35:49.679023    5080 start.go:340] cluster config:
	{Name:kubernetes-upgrade-507000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-507000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:35:49.682497    5080 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:35:49.691662    5080 out.go:177] * Starting "kubernetes-upgrade-507000" primary control-plane node in "kubernetes-upgrade-507000" cluster
	I0729 16:35:49.695603    5080 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 16:35:49.695620    5080 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 16:35:49.695630    5080 cache.go:56] Caching tarball of preloaded images
	I0729 16:35:49.695695    5080 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:35:49.695702    5080 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 16:35:49.695763    5080 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/kubernetes-upgrade-507000/config.json ...
	I0729 16:35:49.696271    5080 start.go:360] acquireMachinesLock for kubernetes-upgrade-507000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:35:49.696300    5080 start.go:364] duration metric: took 22.042µs to acquireMachinesLock for "kubernetes-upgrade-507000"
	I0729 16:35:49.696309    5080 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:35:49.696315    5080 fix.go:54] fixHost starting: 
	I0729 16:35:49.696425    5080 fix.go:112] recreateIfNeeded on kubernetes-upgrade-507000: state=Stopped err=<nil>
	W0729 16:35:49.696433    5080 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:35:49.704547    5080 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-507000" ...
	I0729 16:35:49.708671    5080 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:35:49.708706    5080 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubernetes-upgrade-507000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubernetes-upgrade-507000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubernetes-upgrade-507000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:92:df:a8:7a:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2
	I0729 16:35:49.710817    5080 main.go:141] libmachine: STDOUT: 
	I0729 16:35:49.710835    5080 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:35:49.710864    5080 fix.go:56] duration metric: took 14.548167ms for fixHost
	I0729 16:35:49.710870    5080 start.go:83] releasing machines lock for "kubernetes-upgrade-507000", held for 14.566583ms
	W0729 16:35:49.710876    5080 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:35:49.710916    5080 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:35:49.710920    5080 start.go:729] Will try again in 5 seconds ...
	I0729 16:35:54.713086    5080 start.go:360] acquireMachinesLock for kubernetes-upgrade-507000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:35:54.713616    5080 start.go:364] duration metric: took 409.375µs to acquireMachinesLock for "kubernetes-upgrade-507000"
	I0729 16:35:54.713773    5080 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:35:54.713794    5080 fix.go:54] fixHost starting: 
	I0729 16:35:54.714526    5080 fix.go:112] recreateIfNeeded on kubernetes-upgrade-507000: state=Stopped err=<nil>
	W0729 16:35:54.714552    5080 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:35:54.722938    5080 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-507000" ...
	I0729 16:35:54.726976    5080 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:35:54.727210    5080 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubernetes-upgrade-507000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubernetes-upgrade-507000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubernetes-upgrade-507000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:92:df:a8:7a:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2
	I0729 16:35:54.736811    5080 main.go:141] libmachine: STDOUT: 
	I0729 16:35:54.736865    5080 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:35:54.736945    5080 fix.go:56] duration metric: took 23.154625ms for fixHost
	I0729 16:35:54.736964    5080 start.go:83] releasing machines lock for "kubernetes-upgrade-507000", held for 23.324584ms
	W0729 16:35:54.737140    5080 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-507000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-507000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:35:54.743866    5080 out.go:177] 
	W0729 16:35:54.746960    5080 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:35:54.746979    5080 out.go:239] * 
	* 
	W0729 16:35:54.749124    5080 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:35:54.755919    5080 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-507000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-507000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-507000 version --output=json: exit status 1 (52.802167ms)

** stderr ** 
	error: context "kubernetes-upgrade-507000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-29 16:35:54.822208 -0700 PDT m=+2966.378086334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-507000 -n kubernetes-upgrade-507000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-507000 -n kubernetes-upgrade-507000: exit status 7 (30.994917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-507000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-507000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-507000
--- FAIL: TestKubernetesUpgrade (17.31s)
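
The failed start above has a single root cause: socket_vmnet_client cannot reach the control socket at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives the network file descriptor it needs and both restart attempts abort into GUEST_PROVISION. Below is a minimal Go sketch of the host-side probe this boils down to; probeSocketVMnet is a hypothetical helper for illustration, not minikube's driver code:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// probeSocketVMnet dials the unix control socket that socket_vmnet_client
	// (and, through it, qemu-system-aarch64) depends on. "Connection refused"
	// here means the socket_vmnet daemon is not listening, which is exactly
	// the failure mode in the trace above. Connecting may require root,
	// just as the real client does.
	func probeSocketVMnet(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
		}
		return conn.Close()
	}

	func main() {
		if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("socket_vmnet is up")
	}

On a host in this state the fix is to start the daemon itself (for a Homebrew install, typically sudo brew services start socket_vmnet) before rerunning; the suggested "minikube delete -p kubernetes-upgrade-507000" only clears the stale profile.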

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.5s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19348
- KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2866758914/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.50s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.16s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19348
- KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2818490808/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.16s)
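
Both TestHyperkitDriverSkipUpgrade subtests above fail identically, and predictably: hyperkit is an Intel-only hypervisor, so DRV_UNSUPPORTED_OS (surfaced to the harness as exit status 56) is the expected answer on this darwin/arm64 M1 agent rather than a regression in the binaries under test. A sketch of the kind of guard that would skip these subtests on Apple silicon; skipIfNoHyperkit is a hypothetical helper, not the actual wording of driver_install_or_update_test.go:

	package driver_test

	import (
		"runtime"
		"testing"
	)

	// skipIfNoHyperkit skips hyperkit-specific subtests on hosts where the
	// driver can never work: hyperkit supports only darwin/amd64, so an
	// arm64 Mac always reports DRV_UNSUPPORTED_OS, as in the runs above.
	func skipIfNoHyperkit(t *testing.T) {
		t.Helper()
		if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
			t.Skipf("hyperkit requires darwin/amd64, have %s/%s", runtime.GOOS, runtime.GOARCH)
		}
	}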

TestStoppedBinaryUpgrade/Upgrade (581.02s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2260718269 start -p stopped-upgrade-170000 --memory=2200 --vm-driver=qemu2 
E0729 16:36:18.149028    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2260718269 start -p stopped-upgrade-170000 --memory=2200 --vm-driver=qemu2 : (47.672817875s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2260718269 -p stopped-upgrade-170000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2260718269 -p stopped-upgrade-170000 stop: (12.089778209s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-170000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0729 16:39:39.530646    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0729 16:41:18.140374    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-170000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.160346416s)

-- stdout --
	* [stopped-upgrade-170000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-170000" primary control-plane node in "stopped-upgrade-170000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-170000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0729 16:36:56.113835    5115 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:36:56.113983    5115 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:36:56.113986    5115 out.go:304] Setting ErrFile to fd 2...
	I0729 16:36:56.113989    5115 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:36:56.114102    5115 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:36:56.115167    5115 out.go:298] Setting JSON to false
	I0729 16:36:56.132597    5115 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3983,"bootTime":1722292233,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:36:56.132675    5115 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:36:56.138051    5115 out.go:177] * [stopped-upgrade-170000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:36:56.146103    5115 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:36:56.146168    5115 notify.go:220] Checking for updates...
	I0729 16:36:56.153909    5115 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:36:56.157097    5115 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:36:56.160082    5115 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:36:56.163068    5115 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:36:56.166007    5115 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:36:56.169319    5115 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:36:56.172043    5115 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 16:36:56.175026    5115 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:36:56.179055    5115 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:36:56.186104    5115 start.go:297] selected driver: qemu2
	I0729 16:36:56.186112    5115 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50503 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 16:36:56.186169    5115 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:36:56.188834    5115 cni.go:84] Creating CNI manager for ""
	I0729 16:36:56.188853    5115 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:36:56.188887    5115 start.go:340] cluster config:
	{Name:stopped-upgrade-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50503 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 16:36:56.188944    5115 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:36:56.196021    5115 out.go:177] * Starting "stopped-upgrade-170000" primary control-plane node in "stopped-upgrade-170000" cluster
	I0729 16:36:56.199982    5115 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 16:36:56.199996    5115 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0729 16:36:56.200002    5115 cache.go:56] Caching tarball of preloaded images
	I0729 16:36:56.200051    5115 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:36:56.200057    5115 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0729 16:36:56.200111    5115 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/config.json ...
	I0729 16:36:56.200598    5115 start.go:360] acquireMachinesLock for stopped-upgrade-170000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:36:56.200634    5115 start.go:364] duration metric: took 28.375µs to acquireMachinesLock for "stopped-upgrade-170000"
	I0729 16:36:56.200646    5115 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:36:56.200652    5115 fix.go:54] fixHost starting: 
	I0729 16:36:56.200770    5115 fix.go:112] recreateIfNeeded on stopped-upgrade-170000: state=Stopped err=<nil>
	W0729 16:36:56.200779    5115 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:36:56.208037    5115 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-170000" ...
	I0729 16:36:56.212078    5115 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:36:56.212168    5115 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/stopped-upgrade-170000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/stopped-upgrade-170000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/stopped-upgrade-170000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50469-:22,hostfwd=tcp::50470-:2376,hostname=stopped-upgrade-170000 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/stopped-upgrade-170000/disk.qcow2
	I0729 16:36:56.260009    5115 main.go:141] libmachine: STDOUT: 
	I0729 16:36:56.260039    5115 main.go:141] libmachine: STDERR: 
	I0729 16:36:56.260045    5115 main.go:141] libmachine: Waiting for VM to start (ssh -p 50469 docker@127.0.0.1)...
	I0729 16:37:16.110640    5115 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/config.json ...
	I0729 16:37:16.110997    5115 machine.go:94] provisionDockerMachine start ...
	I0729 16:37:16.111073    5115 main.go:141] libmachine: Using SSH client type: native
	I0729 16:37:16.111274    5115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010caa10] 0x1010cd270 <nil>  [] 0s} localhost 50469 <nil> <nil>}
	I0729 16:37:16.111280    5115 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 16:37:16.165331    5115 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 16:37:16.165342    5115 buildroot.go:166] provisioning hostname "stopped-upgrade-170000"
	I0729 16:37:16.165395    5115 main.go:141] libmachine: Using SSH client type: native
	I0729 16:37:16.165506    5115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010caa10] 0x1010cd270 <nil>  [] 0s} localhost 50469 <nil> <nil>}
	I0729 16:37:16.165512    5115 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-170000 && echo "stopped-upgrade-170000" | sudo tee /etc/hostname
	I0729 16:37:16.220984    5115 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-170000
	
	I0729 16:37:16.221032    5115 main.go:141] libmachine: Using SSH client type: native
	I0729 16:37:16.221136    5115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010caa10] 0x1010cd270 <nil>  [] 0s} localhost 50469 <nil> <nil>}
	I0729 16:37:16.221144    5115 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-170000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-170000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-170000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 16:37:16.275618    5115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 16:37:16.275628    5115 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19348-1218/.minikube CaCertPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19348-1218/.minikube}
	I0729 16:37:16.275635    5115 buildroot.go:174] setting up certificates
	I0729 16:37:16.275639    5115 provision.go:84] configureAuth start
	I0729 16:37:16.275650    5115 provision.go:143] copyHostCerts
	I0729 16:37:16.275718    5115 exec_runner.go:144] found /Users/jenkins/minikube-integration/19348-1218/.minikube/ca.pem, removing ...
	I0729 16:37:16.275724    5115 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19348-1218/.minikube/ca.pem
	I0729 16:37:16.276036    5115 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19348-1218/.minikube/ca.pem (1082 bytes)
	I0729 16:37:16.276226    5115 exec_runner.go:144] found /Users/jenkins/minikube-integration/19348-1218/.minikube/cert.pem, removing ...
	I0729 16:37:16.276230    5115 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19348-1218/.minikube/cert.pem
	I0729 16:37:16.276284    5115 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19348-1218/.minikube/cert.pem (1123 bytes)
	I0729 16:37:16.276386    5115 exec_runner.go:144] found /Users/jenkins/minikube-integration/19348-1218/.minikube/key.pem, removing ...
	I0729 16:37:16.276389    5115 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19348-1218/.minikube/key.pem
	I0729 16:37:16.276436    5115 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19348-1218/.minikube/key.pem (1675 bytes)
	I0729 16:37:16.276558    5115 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-170000 san=[127.0.0.1 localhost minikube stopped-upgrade-170000]
	I0729 16:37:16.341478    5115 provision.go:177] copyRemoteCerts
	I0729 16:37:16.341507    5115 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 16:37:16.341514    5115 sshutil.go:53] new ssh client: &{IP:localhost Port:50469 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/stopped-upgrade-170000/id_rsa Username:docker}
	I0729 16:37:16.370825    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 16:37:16.377219    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 16:37:16.383896    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 16:37:16.391251    5115 provision.go:87] duration metric: took 115.599709ms to configureAuth
	I0729 16:37:16.391260    5115 buildroot.go:189] setting minikube options for container-runtime
	I0729 16:37:16.391369    5115 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:37:16.391413    5115 main.go:141] libmachine: Using SSH client type: native
	I0729 16:37:16.391510    5115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010caa10] 0x1010cd270 <nil>  [] 0s} localhost 50469 <nil> <nil>}
	I0729 16:37:16.391514    5115 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 16:37:16.441325    5115 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 16:37:16.441332    5115 buildroot.go:70] root file system type: tmpfs
	I0729 16:37:16.441382    5115 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 16:37:16.441423    5115 main.go:141] libmachine: Using SSH client type: native
	I0729 16:37:16.441522    5115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010caa10] 0x1010cd270 <nil>  [] 0s} localhost 50469 <nil> <nil>}
	I0729 16:37:16.441554    5115 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 16:37:16.497094    5115 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 16:37:16.497138    5115 main.go:141] libmachine: Using SSH client type: native
	I0729 16:37:16.497253    5115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010caa10] 0x1010cd270 <nil>  [] 0s} localhost 50469 <nil> <nil>}
	I0729 16:37:16.497261    5115 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 16:37:16.831352    5115 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0729 16:37:16.831364    5115 machine.go:97] duration metric: took 720.379959ms to provisionDockerMachine
	I0729 16:37:16.831371    5115 start.go:293] postStartSetup for "stopped-upgrade-170000" (driver="qemu2")
	I0729 16:37:16.831378    5115 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 16:37:16.831441    5115 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 16:37:16.831463    5115 sshutil.go:53] new ssh client: &{IP:localhost Port:50469 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/stopped-upgrade-170000/id_rsa Username:docker}
	I0729 16:37:16.859340    5115 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 16:37:16.860551    5115 info.go:137] Remote host: Buildroot 2021.02.12
	I0729 16:37:16.860558    5115 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19348-1218/.minikube/addons for local assets ...
	I0729 16:37:16.860640    5115 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19348-1218/.minikube/files for local assets ...
	I0729 16:37:16.860757    5115 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19348-1218/.minikube/files/etc/ssl/certs/17142.pem -> 17142.pem in /etc/ssl/certs
	I0729 16:37:16.860887    5115 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 16:37:16.863921    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/files/etc/ssl/certs/17142.pem --> /etc/ssl/certs/17142.pem (1708 bytes)
	I0729 16:37:16.871063    5115 start.go:296] duration metric: took 39.68825ms for postStartSetup
	I0729 16:37:16.871077    5115 fix.go:56] duration metric: took 20.6710505s for fixHost
	I0729 16:37:16.871111    5115 main.go:141] libmachine: Using SSH client type: native
	I0729 16:37:16.871226    5115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010caa10] 0x1010cd270 <nil>  [] 0s} localhost 50469 <nil> <nil>}
	I0729 16:37:16.871231    5115 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 16:37:16.920941    5115 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722296236.853030379
	
	I0729 16:37:16.920951    5115 fix.go:216] guest clock: 1722296236.853030379
	I0729 16:37:16.920955    5115 fix.go:229] Guest: 2024-07-29 16:37:16.853030379 -0700 PDT Remote: 2024-07-29 16:37:16.871079 -0700 PDT m=+20.776733126 (delta=-18.048621ms)
	I0729 16:37:16.920970    5115 fix.go:200] guest clock delta is within tolerance: -18.048621ms
	I0729 16:37:16.920973    5115 start.go:83] releasing machines lock for "stopped-upgrade-170000", held for 20.720958875s
	I0729 16:37:16.921035    5115 ssh_runner.go:195] Run: cat /version.json
	I0729 16:37:16.921038    5115 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 16:37:16.921044    5115 sshutil.go:53] new ssh client: &{IP:localhost Port:50469 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/stopped-upgrade-170000/id_rsa Username:docker}
	I0729 16:37:16.921056    5115 sshutil.go:53] new ssh client: &{IP:localhost Port:50469 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/stopped-upgrade-170000/id_rsa Username:docker}
	W0729 16:37:16.921559    5115 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50469: connect: connection refused
	I0729 16:37:16.921581    5115 retry.go:31] will retry after 281.037628ms: dial tcp [::1]:50469: connect: connection refused
	W0729 16:37:17.249802    5115 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0729 16:37:17.249941    5115 ssh_runner.go:195] Run: systemctl --version
	I0729 16:37:17.254173    5115 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 16:37:17.257089    5115 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 16:37:17.257141    5115 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0729 16:37:17.261610    5115 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0729 16:37:17.268574    5115 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 16:37:17.268585    5115 start.go:495] detecting cgroup driver to use...
	I0729 16:37:17.268677    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 16:37:17.281971    5115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0729 16:37:17.285517    5115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 16:37:17.288871    5115 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 16:37:17.288907    5115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 16:37:17.292321    5115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 16:37:17.295950    5115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 16:37:17.298825    5115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 16:37:17.301844    5115 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 16:37:17.305292    5115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 16:37:17.310548    5115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 16:37:17.314775    5115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 16:37:17.317914    5115 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 16:37:17.320723    5115 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 16:37:17.323687    5115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:37:17.386693    5115 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0729 16:37:17.393173    5115 start.go:495] detecting cgroup driver to use...
	I0729 16:37:17.393260    5115 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 16:37:17.398798    5115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 16:37:17.403912    5115 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 16:37:17.409845    5115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 16:37:17.414006    5115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 16:37:17.418033    5115 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0729 16:37:17.479150    5115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 16:37:17.484179    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 16:37:17.489479    5115 ssh_runner.go:195] Run: which cri-dockerd
	I0729 16:37:17.490502    5115 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 16:37:17.493202    5115 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 16:37:17.498096    5115 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 16:37:17.559553    5115 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 16:37:17.624261    5115 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 16:37:17.624328    5115 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 16:37:17.629662    5115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:37:17.690610    5115 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 16:37:18.858812    5115 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.168219792s)
	I0729 16:37:18.858874    5115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 16:37:18.864298    5115 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0729 16:37:18.870394    5115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 16:37:18.874670    5115 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 16:37:18.946682    5115 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 16:37:19.006235    5115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:37:19.067537    5115 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 16:37:19.073738    5115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 16:37:19.077986    5115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:37:19.143614    5115 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 16:37:19.184880    5115 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 16:37:19.184963    5115 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 16:37:19.187473    5115 start.go:563] Will wait 60s for crictl version
	I0729 16:37:19.187529    5115 ssh_runner.go:195] Run: which crictl
	I0729 16:37:19.188753    5115 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 16:37:19.203480    5115 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0729 16:37:19.203550    5115 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 16:37:19.218799    5115 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 16:37:19.241506    5115 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0729 16:37:19.241568    5115 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0729 16:37:19.242916    5115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 16:37:19.246750    5115 kubeadm.go:883] updating cluster {Name:stopped-upgrade-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50503 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0729 16:37:19.246795    5115 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 16:37:19.246835    5115 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 16:37:19.257427    5115 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 16:37:19.257435    5115 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 16:37:19.257482    5115 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 16:37:19.260586    5115 ssh_runner.go:195] Run: which lz4
	I0729 16:37:19.261846    5115 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 16:37:19.263004    5115 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 16:37:19.263013    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0729 16:37:20.213454    5115 docker.go:649] duration metric: took 951.670292ms to copy over tarball
	I0729 16:37:20.213513    5115 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 16:37:21.379435    5115 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.165943416s)
	I0729 16:37:21.379448    5115 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 16:37:21.395189    5115 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 16:37:21.398565    5115 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0729 16:37:21.403613    5115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:37:21.467249    5115 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 16:37:23.146588    5115 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.679369834s)
	I0729 16:37:23.146679    5115 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 16:37:23.158595    5115 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 16:37:23.158603    5115 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 16:37:23.158609    5115 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 16:37:23.162726    5115 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:37:23.164297    5115 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:37:23.166107    5115 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:37:23.166182    5115 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:37:23.168012    5115 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:37:23.168020    5115 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:37:23.169321    5115 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:37:23.169470    5115 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:37:23.170976    5115 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:37:23.171003    5115 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:37:23.172078    5115 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 16:37:23.172151    5115 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:37:23.172980    5115 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 16:37:23.173041    5115 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:37:23.174653    5115 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 16:37:23.174653    5115 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 16:37:23.582186    5115 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:37:23.594042    5115 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0729 16:37:23.594066    5115 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:37:23.594110    5115 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0729 16:37:23.599678    5115 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:37:23.601579    5115 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:37:23.604219    5115 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0729 16:37:23.611940    5115 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0729 16:37:23.611961    5115 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:37:23.612011    5115 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0729 16:37:23.615224    5115 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:37:23.617608    5115 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0729 16:37:23.617631    5115 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:37:23.617665    5115 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 16:37:23.623544    5115 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0729 16:37:23.624300    5115 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 16:37:23.624436    5115 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:37:23.633117    5115 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0729 16:37:23.633140    5115 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:37:23.633202    5115 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0729 16:37:23.633239    5115 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0729 16:37:23.642478    5115 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0729 16:37:23.642498    5115 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:37:23.642551    5115 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 16:37:23.643512    5115 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0729 16:37:23.652924    5115 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 16:37:23.653033    5115 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 16:37:23.654759    5115 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 16:37:23.655042    5115 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 16:37:23.655464    5115 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0729 16:37:23.655474    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0729 16:37:23.677729    5115 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0729 16:37:23.677737    5115 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0729 16:37:23.677749    5115 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 16:37:23.677756    5115 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0729 16:37:23.677802    5115 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0729 16:37:23.677802    5115 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0729 16:37:23.715483    5115 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 16:37:23.715494    5115 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 16:37:23.715510    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0729 16:37:23.715489    5115 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 16:37:23.715604    5115 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0729 16:37:23.753264    5115 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 16:37:23.753299    5115 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0729 16:37:23.753324    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0729 16:37:23.761201    5115 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 16:37:23.761210    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0729 16:37:23.791931    5115 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W0729 16:37:23.826628    5115 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 16:37:23.826749    5115 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:37:23.837592    5115 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0729 16:37:23.837612    5115 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:37:23.837664    5115 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:37:23.851710    5115 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 16:37:23.851826    5115 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 16:37:23.853401    5115 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0729 16:37:23.853418    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0729 16:37:23.879821    5115 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 16:37:23.879834    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0729 16:37:24.111545    5115 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 16:37:24.111583    5115 cache_images.go:92] duration metric: took 952.995833ms to LoadCachedImages
	W0729 16:37:24.111626    5115 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
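The block above shows minikube's cached-image reconciliation in miniature: inspect the image ID in the container runtime, treat a hash mismatch as "needs transfer", remove the stale copy, transfer the tarball from the host cache, and pipe it through docker load. A minimal Go sketch of that cycle follows; the helper names (imageID, ensureImage), the placeholder arguments in main, and the direct local docker invocations are illustrative assumptions, not minikube's actual cache_images API, and plain exec calls stand in for the ssh_runner seen in the log.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // imageID asks the Docker daemon for the ID of a tagged image; a non-nil
    // error usually means the tag is absent from the container runtime.
    func imageID(tag string) (string, error) {
    	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", tag).Output()
    	return strings.TrimSpace(string(out)), err
    }

    // ensureImage mirrors the needs-transfer decision above: keep the image if
    // it already exists at the expected hash, otherwise remove it and re-load
    // the cached tarball via the "cat <tar> | docker load" pipeline.
    func ensureImage(tag, wantID, tarPath string) error {
    	if id, err := imageID(tag); err == nil && id == wantID {
    		return nil // already present at the expected hash
    	}
    	_ = exec.Command("docker", "rmi", tag).Run() // drop any mismatched copy
    	f, err := os.Open(tarPath)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	load := exec.Command("docker", "load")
    	load.Stdin = f
    	if err := load.Run(); err != nil {
    		return fmt.Errorf("loading %s from cache: %w", tag, err)
    	}
    	return nil
    }

    func main() {
    	// "EXPECTED_ID" and the tar path are hypothetical placeholders echoing
    	// the kube-apiserver entry in the log above.
    	err := ensureImage("registry.k8s.io/kube-apiserver:v1.24.1",
    		"EXPECTED_ID", "/var/lib/minikube/images/kube-apiserver_v1.24.1")
    	fmt.Println(err)
    }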
	I0729 16:37:24.111630    5115 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0729 16:37:24.111684    5115 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-170000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 16:37:24.111742    5115 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 16:37:24.125044    5115 cni.go:84] Creating CNI manager for ""
	I0729 16:37:24.125055    5115 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:37:24.125059    5115 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 16:37:24.125067    5115 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-170000 NodeName:stopped-upgrade-170000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 16:37:24.125126    5115 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-170000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 16:37:24.125729    5115 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0729 16:37:24.128505    5115 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 16:37:24.128530    5115 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 16:37:24.131188    5115 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0729 16:37:24.135679    5115 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 16:37:24.140342    5115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0729 16:37:24.145876    5115 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0729 16:37:24.147414    5115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 16:37:24.152059    5115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:37:24.213385    5115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 16:37:24.219426    5115 certs.go:68] Setting up /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000 for IP: 10.0.2.15
	I0729 16:37:24.219434    5115 certs.go:194] generating shared ca certs ...
	I0729 16:37:24.219442    5115 certs.go:226] acquiring lock for ca certs: {Name:mk96bd81121b57115fda9376f192a645eb60e2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:37:24.219613    5115 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/ca.key
	I0729 16:37:24.219678    5115 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/proxy-client-ca.key
	I0729 16:37:24.219686    5115 certs.go:256] generating profile certs ...
	I0729 16:37:24.219760    5115 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/client.key
	I0729 16:37:24.219786    5115 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/apiserver.key.c425be07
	I0729 16:37:24.219799    5115 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/apiserver.crt.c425be07 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0729 16:37:24.362374    5115 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/apiserver.crt.c425be07 ...
	I0729 16:37:24.362389    5115 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/apiserver.crt.c425be07: {Name:mk819e52ffaeecb246d86958415d95ac02b9c779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:37:24.362780    5115 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/apiserver.key.c425be07 ...
	I0729 16:37:24.362789    5115 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/apiserver.key.c425be07: {Name:mk821641a7dc4277bb039a7049a4ea3656f9a023 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:37:24.362939    5115 certs.go:381] copying /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/apiserver.crt.c425be07 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/apiserver.crt
	I0729 16:37:24.363103    5115 certs.go:385] copying /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/apiserver.key.c425be07 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/apiserver.key
	I0729 16:37:24.363263    5115 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/proxy-client.key
	I0729 16:37:24.363406    5115 certs.go:484] found cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/1714.pem (1338 bytes)
	W0729 16:37:24.363438    5115 certs.go:480] ignoring /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/1714_empty.pem, impossibly tiny 0 bytes
	I0729 16:37:24.363444    5115 certs.go:484] found cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 16:37:24.363464    5115 certs.go:484] found cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem (1082 bytes)
	I0729 16:37:24.363488    5115 certs.go:484] found cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem (1123 bytes)
	I0729 16:37:24.363532    5115 certs.go:484] found cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/key.pem (1675 bytes)
	I0729 16:37:24.363592    5115 certs.go:484] found cert: /Users/jenkins/minikube-integration/19348-1218/.minikube/files/etc/ssl/certs/17142.pem (1708 bytes)
	I0729 16:37:24.363915    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 16:37:24.371093    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 16:37:24.378463    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 16:37:24.385152    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 16:37:24.392032    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 16:37:24.399566    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 16:37:24.406702    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 16:37:24.413451    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 16:37:24.420003    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/files/etc/ssl/certs/17142.pem --> /usr/share/ca-certificates/17142.pem (1708 bytes)
	I0729 16:37:24.427216    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 16:37:24.434179    5115 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/1714.pem --> /usr/share/ca-certificates/1714.pem (1338 bytes)
	I0729 16:37:24.440963    5115 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 16:37:24.445855    5115 ssh_runner.go:195] Run: openssl version
	I0729 16:37:24.447668    5115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17142.pem && ln -fs /usr/share/ca-certificates/17142.pem /etc/ssl/certs/17142.pem"
	I0729 16:37:24.450824    5115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17142.pem
	I0729 16:37:24.452194    5115 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 22:54 /usr/share/ca-certificates/17142.pem
	I0729 16:37:24.452211    5115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17142.pem
	I0729 16:37:24.453934    5115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17142.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 16:37:24.456610    5115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 16:37:24.460126    5115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:37:24.461559    5115 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:37:24.461578    5115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 16:37:24.463237    5115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 16:37:24.466067    5115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1714.pem && ln -fs /usr/share/ca-certificates/1714.pem /etc/ssl/certs/1714.pem"
	I0729 16:37:24.468760    5115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1714.pem
	I0729 16:37:24.470116    5115 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 22:54 /usr/share/ca-certificates/1714.pem
	I0729 16:37:24.470137    5115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1714.pem
	I0729 16:37:24.471789    5115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1714.pem /etc/ssl/certs/51391683.0"
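The openssl x509 -hash / ln -fs pairs above implement OpenSSL's c_rehash convention: each trusted CA in /etc/ssl/certs is reachable through a symlink named <subject-hash>.0, which is exactly the 3ec20f2e.0, b5213941.0, and 51391683.0 links being created. A hedged Go sketch of that step follows; linkCACert is an illustrative helper name, not a minikube function, and it shells out to openssl the same way the log does.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCACert computes the OpenSSL subject hash of a PEM certificate and
    // installs the "<hash>.0" symlink that the TLS stack looks up at runtime.
    func linkCACert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // emulate ln -fs: replace any existing link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	fmt.Println(linkCACert("/usr/share/ca-certificates/minikubeCA.pem"))
    }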
	I0729 16:37:24.474923    5115 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 16:37:24.476336    5115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 16:37:24.478084    5115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 16:37:24.479869    5115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 16:37:24.481932    5115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 16:37:24.483739    5115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 16:37:24.485411    5115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
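Each openssl x509 -checkend 86400 call above exits non-zero when the certificate expires within the next 24 hours, which is what would trigger regeneration. The same test done natively in Go, as a sketch (validFor24h is an illustrative name; the path in main is one of the certs checked in the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor24h reports whether the first certificate in pemBytes remains
    // valid for at least another 24 hours, mirroring "-checkend 86400".
    func validFor24h(pemBytes []byte, now time.Time) (bool, error) {
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return now.Add(24 * time.Hour).Before(cert.NotAfter), nil
    }

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println(validFor24h(data, time.Now()))
    }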
	I0729 16:37:24.487111    5115 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50503 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 16:37:24.487178    5115 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 16:37:24.497102    5115 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 16:37:24.500324    5115 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 16:37:24.500330    5115 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 16:37:24.500353    5115 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 16:37:24.504243    5115 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 16:37:24.504556    5115 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-170000" does not appear in /Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:37:24.504651    5115 kubeconfig.go:62] /Users/jenkins/minikube-integration/19348-1218/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-170000" cluster setting kubeconfig missing "stopped-upgrade-170000" context setting]
	I0729 16:37:24.504858    5115 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/kubeconfig: {Name:mkadb977bd50641dea3f6c522a66ad62f461af12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:37:24.505296    5115 kapi.go:59] client config for stopped-upgrade-170000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/client.key", CAFile:"/Users/jenkins/minikube-integration/19348-1218/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102460080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 16:37:24.505617    5115 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 16:37:24.508424    5115 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-170000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
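The drift check above is just diff -u with its exit status interpreted: 0 means the rendered kubeadm.yaml is current, 1 means it changed (here the criSocket scheme and the cgroup driver) and the cluster must be reconfigured from the .new file. A sketch of that decision follows; configDrifted is an illustrative helper name, and the sudo/diff invocation matches the log but runs locally here rather than over ssh.

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    // configDrifted runs the same diff as the log and maps diff's exit codes:
    // 0 = identical, 1 = files differ (reconfigure), anything else = failure.
    func configDrifted(current, proposed string) (bool, string, error) {
    	out, err := exec.Command("sudo", "diff", "-u", current, proposed).CombinedOutput()
    	if err == nil {
    		return false, "", nil
    	}
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
    		return true, string(out), nil
    	}
    	return false, "", err
    }

    func main() {
    	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new")
    	fmt.Println(drifted, err)
    	if drifted {
    		fmt.Print(diff)
    	}
    }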
	I0729 16:37:24.508429    5115 kubeadm.go:1160] stopping kube-system containers ...
	I0729 16:37:24.508467    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 16:37:24.519202    5115 docker.go:483] Stopping containers: [992689aa0398 f5562f98bfc0 ae839b7e08bd a6704f01ea0d 713ebdc98434 bc11e1c032a5 8b58cefd71ff 0de0e91e43bd 23ae4cb25902]
	I0729 16:37:24.519268    5115 ssh_runner.go:195] Run: docker stop 992689aa0398 f5562f98bfc0 ae839b7e08bd a6704f01ea0d 713ebdc98434 bc11e1c032a5 8b58cefd71ff 0de0e91e43bd 23ae4cb25902
	I0729 16:37:24.529760    5115 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 16:37:24.535526    5115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 16:37:24.538685    5115 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 16:37:24.538694    5115 kubeadm.go:157] found existing configuration files:
	
	I0729 16:37:24.538714    5115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/admin.conf
	I0729 16:37:24.541445    5115 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 16:37:24.541469    5115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 16:37:24.544143    5115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/kubelet.conf
	I0729 16:37:24.547128    5115 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 16:37:24.547149    5115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 16:37:24.550015    5115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/controller-manager.conf
	I0729 16:37:24.552531    5115 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 16:37:24.552554    5115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 16:37:24.555694    5115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/scheduler.conf
	I0729 16:37:24.558664    5115 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 16:37:24.558683    5115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 16:37:24.561308    5115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 16:37:24.564193    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:37:24.586449    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:37:24.960404    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:37:25.069989    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:37:25.100808    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 16:37:25.127040    5115 api_server.go:52] waiting for apiserver process to appear ...
	I0729 16:37:25.127117    5115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:37:25.629300    5115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:37:26.128983    5115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:37:26.133081    5115 api_server.go:72] duration metric: took 1.006073667s to wait for apiserver process to appear ...
	I0729 16:37:26.133090    5115 api_server.go:88] waiting for apiserver healthz status ...
	I0729 16:37:26.133099    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:31.134385    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:31.134455    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:36.134892    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:36.134943    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:41.135208    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:41.135256    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:46.135553    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:46.135577    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:51.135862    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:51.135875    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:37:56.136331    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:37:56.136381    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:01.137159    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:01.137181    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:06.138005    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:06.138025    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:11.138538    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:11.138595    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:16.140123    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:16.140162    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:21.142119    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:21.142139    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:26.144271    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
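From here the run settles into the loop that ultimately times this test out: probe https://10.0.2.15:8443/healthz with a short client timeout, and after repeated failures fall back to gathering component logs before retrying. A hedged sketch of the probe half follows; waitForHealthz is an illustrative name, and TLS verification is skipped only to keep the example self-contained, whereas minikube verifies against the cluster CA.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver healthz endpoint until it answers 200
    // or the deadline passes; each probe is capped at 5s, matching the
    // Client.Timeout errors in the log.
    func waitForHealthz(url string, deadline time.Time) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(2*time.Minute)))
    }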
	I0729 16:38:26.144620    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:38:26.176685    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:38:26.176878    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:38:26.203741    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:38:26.203843    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:38:26.217664    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:38:26.217739    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:38:26.229631    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:38:26.229702    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:38:26.241155    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:38:26.241239    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:38:26.252215    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:38:26.252284    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:38:26.264451    5115 logs.go:276] 0 containers: []
	W0729 16:38:26.264466    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:38:26.264540    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:38:26.275781    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:38:26.275803    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:38:26.275808    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:38:26.293608    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:38:26.293619    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:38:26.307734    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:38:26.307748    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:38:26.319761    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:38:26.319777    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:38:26.331473    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:38:26.331484    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:38:26.369517    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:38:26.369527    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:38:26.381180    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:38:26.381201    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:38:26.398830    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:38:26.398841    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:38:26.412518    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:38:26.412529    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:38:26.423766    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:38:26.423778    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:38:26.448726    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:38:26.448734    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:38:26.524298    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:38:26.524310    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:38:26.536468    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:38:26.536482    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:38:26.558822    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:38:26.558839    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:38:26.570504    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:38:26.570516    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:38:26.574663    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:38:26.574673    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:38:26.615126    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:38:26.615137    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
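Each "Gathering logs for ..." pass above is mechanical: list container IDs per component with a k8s_<name> filter, then tail the last 400 lines from each. A compact sketch of that sweep; gatherLogs and componentIDs are illustrative helper names rather than minikube's logs.go API.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // componentIDs lists container IDs whose names match k8s_<component>,
    // mirroring the docker ps filters in the log.
    func componentIDs(component string) []string {
    	out, _ := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	return strings.Fields(string(out))
    }

    // gatherLogs tails the last 400 lines from every container ID given.
    func gatherLogs(ids []string) map[string]string {
    	logs := make(map[string]string, len(ids))
    	for _, id := range ids {
    		out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    		logs[id] = string(out)
    	}
    	return logs
    }

    func main() {
    	for id, l := range gatherLogs(componentIDs("kube-apiserver")) {
    		fmt.Printf("== %s ==\n%s", id, l)
    	}
    }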
	I0729 16:38:29.131890    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:34.133165    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:34.133319    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:38:34.144557    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:38:34.144636    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:38:34.155666    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:38:34.155746    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:38:34.166233    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:38:34.166320    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:38:34.176744    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:38:34.176817    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:38:34.195388    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:38:34.195477    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:38:34.206195    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:38:34.206270    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:38:34.216466    5115 logs.go:276] 0 containers: []
	W0729 16:38:34.216477    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:38:34.216533    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:38:34.227712    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:38:34.227734    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:38:34.227739    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:38:34.232409    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:38:34.232417    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:38:34.269224    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:38:34.269236    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:38:34.283789    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:38:34.283800    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:38:34.301175    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:38:34.301186    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:38:34.337406    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:38:34.337415    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:38:34.375606    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:38:34.375621    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:38:34.400116    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:38:34.400127    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:38:34.417721    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:38:34.417731    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:38:34.430195    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:38:34.430206    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:38:34.442186    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:38:34.442202    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:38:34.456173    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:38:34.456184    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:38:34.468655    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:38:34.468667    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:38:34.481717    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:38:34.481732    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:38:34.493262    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:38:34.493286    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:38:34.504984    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:38:34.504994    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:38:34.530443    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:38:34.530452    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:38:37.043933    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:42.046075    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:42.046317    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:38:42.072499    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:38:42.072603    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:38:42.087670    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:38:42.087754    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:38:42.100092    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:38:42.100163    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:38:42.112639    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:38:42.112707    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:38:42.122782    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:38:42.122848    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:38:42.133370    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:38:42.133441    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:38:42.143430    5115 logs.go:276] 0 containers: []
	W0729 16:38:42.143441    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:38:42.143503    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:38:42.153324    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:38:42.153342    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:38:42.153347    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:38:42.197220    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:38:42.197236    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:38:42.212039    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:38:42.212050    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:38:42.223558    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:38:42.223571    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:38:42.238341    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:38:42.238352    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:38:42.256235    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:38:42.256246    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:38:42.281591    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:38:42.281599    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:38:42.295447    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:38:42.295459    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:38:42.330944    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:38:42.330957    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:38:42.346603    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:38:42.346619    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:38:42.369326    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:38:42.369340    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:38:42.374013    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:38:42.374021    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:38:42.386062    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:38:42.386078    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:38:42.397934    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:38:42.397945    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:38:42.409267    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:38:42.409278    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:38:42.446605    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:38:42.446620    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:38:42.460508    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:38:42.460524    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:38:44.974301    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:49.976485    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:49.976722    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:38:49.992957    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:38:49.993039    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:38:50.006217    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:38:50.006290    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:38:50.017128    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:38:50.017197    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:38:50.027712    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:38:50.027789    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:38:50.038278    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:38:50.038355    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:38:50.055775    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:38:50.055844    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:38:50.066064    5115 logs.go:276] 0 containers: []
	W0729 16:38:50.066074    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:38:50.066133    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:38:50.076859    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:38:50.076876    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:38:50.076881    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:38:50.088261    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:38:50.088272    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:38:50.112686    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:38:50.112698    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:38:50.151915    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:38:50.151927    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:38:50.167001    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:38:50.167014    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:38:50.186118    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:38:50.186128    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:38:50.198618    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:38:50.198630    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:38:50.215397    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:38:50.215408    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:38:50.254031    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:38:50.254045    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:38:50.291609    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:38:50.291622    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:38:50.312399    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:38:50.312410    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:38:50.330585    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:38:50.330595    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:38:50.341993    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:38:50.342008    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:38:50.353532    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:38:50.353543    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:38:50.358036    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:38:50.358044    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:38:50.372035    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:38:50.372046    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:38:50.383684    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:38:50.383695    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:38:52.897492    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:38:57.899667    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:38:57.899898    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:38:57.918870    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:38:57.918971    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:38:57.934602    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:38:57.934682    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:38:57.946145    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:38:57.946216    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:38:57.956937    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:38:57.957007    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:38:57.967605    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:38:57.967669    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:38:57.978704    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:38:57.978775    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:38:57.988953    5115 logs.go:276] 0 containers: []
	W0729 16:38:57.988968    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:38:57.989030    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:38:57.999774    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:38:57.999795    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:38:57.999801    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:38:58.035184    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:38:58.035200    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:38:58.048422    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:38:58.048434    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:38:58.072742    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:38:58.072753    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:38:58.087652    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:38:58.087667    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:38:58.104936    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:38:58.104947    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:38:58.116807    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:38:58.116818    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:38:58.155110    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:38:58.155117    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:38:58.159218    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:38:58.159227    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:38:58.173179    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:38:58.173191    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:38:58.211026    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:38:58.211040    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:38:58.225288    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:38:58.225298    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:38:58.237179    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:38:58.237191    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:38:58.248984    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:38:58.248995    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:38:58.270266    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:38:58.270278    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:38:58.281580    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:38:58.281596    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:38:58.293004    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:38:58.293016    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:00.806910    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:05.809052    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:05.809260    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:05.830099    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:39:05.830203    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:05.844675    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:39:05.844749    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:05.857216    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:39:05.857285    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:05.867596    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:39:05.867666    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:05.883078    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:39:05.883164    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:05.899077    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:39:05.899151    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:05.908942    5115 logs.go:276] 0 containers: []
	W0729 16:39:05.908953    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:05.909003    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:05.927787    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:39:05.927805    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:05.927810    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:05.962398    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:39:05.962413    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:39:05.973794    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:05.973805    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:05.998168    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:39:05.998178    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:39:06.012613    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:39:06.012625    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:06.024503    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:39:06.024514    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:39:06.044924    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:06.044936    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:06.080879    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:39:06.080887    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:39:06.119914    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:39:06.119926    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:39:06.141858    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:39:06.141870    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:39:06.154215    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:39:06.154227    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:39:06.176319    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:39:06.176331    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:39:06.190912    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:39:06.190923    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:39:06.206515    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:06.206527    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:06.211250    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:39:06.211257    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:39:06.225212    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:39:06.225223    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:39:06.238926    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:39:06.238937    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:39:08.752848    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:13.755055    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:13.755175    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:13.768843    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:39:13.768921    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:13.780359    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:39:13.780439    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:13.791259    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:39:13.791327    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:13.801980    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:39:13.802044    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:13.812435    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:39:13.812496    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:13.823135    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:39:13.823203    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:13.833874    5115 logs.go:276] 0 containers: []
	W0729 16:39:13.833889    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:13.833946    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:13.845544    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:39:13.845562    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:13.845567    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:13.886101    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:39:13.886114    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:39:13.901991    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:39:13.902002    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:39:13.914086    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:39:13.914097    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:39:13.926319    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:39:13.926331    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:39:13.945137    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:39:13.945154    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:13.957522    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:13.957532    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:13.994052    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:13.994063    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:13.998691    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:39:13.998698    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:39:14.041211    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:39:14.041221    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:39:14.055046    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:39:14.055056    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:39:14.066362    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:39:14.066374    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:39:14.083506    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:14.083517    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:14.108467    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:39:14.108473    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:39:14.120463    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:39:14.120474    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:39:14.141183    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:39:14.141196    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:39:14.160980    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:39:14.160991    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:39:16.684452    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:21.686562    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:21.686732    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:21.704071    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:39:21.704163    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:21.717182    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:39:21.717254    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:21.728155    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:39:21.728220    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:21.740583    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:39:21.740660    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:21.754758    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:39:21.754838    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:21.765827    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:39:21.765899    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:21.776385    5115 logs.go:276] 0 containers: []
	W0729 16:39:21.776397    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:21.776457    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:21.787346    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:39:21.787366    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:39:21.787372    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:39:21.805096    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:39:21.805110    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:39:21.816193    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:39:21.816202    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:39:21.830468    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:39:21.830481    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:39:21.842745    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:39:21.842754    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:39:21.854373    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:21.854383    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:21.892289    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:39:21.892296    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:39:21.906383    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:39:21.906397    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:39:21.917760    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:39:21.917769    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:39:21.942660    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:39:21.942676    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:21.954591    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:39:21.954604    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:39:21.992153    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:21.992167    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:22.029576    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:39:22.029593    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:39:22.043764    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:39:22.043774    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:39:22.054818    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:39:22.054827    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:39:22.066463    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:22.066473    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:22.091420    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:22.091427    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:24.597255    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:29.599427    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:29.599584    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:29.617489    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:39:29.617568    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:29.628287    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:39:29.628376    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:29.639202    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:39:29.639271    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:29.650101    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:39:29.650174    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:29.660968    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:39:29.661042    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:29.671858    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:39:29.671927    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:29.686562    5115 logs.go:276] 0 containers: []
	W0729 16:39:29.686580    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:29.686637    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:29.696956    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:39:29.696976    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:29.696981    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:29.733109    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:29.733122    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:29.768225    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:39:29.768238    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:39:29.806067    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:39:29.806078    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:39:29.822216    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:39:29.822230    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:39:29.838654    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:39:29.838668    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:39:29.850242    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:29.850254    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:29.854834    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:39:29.854840    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:39:29.866166    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:39:29.866178    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:39:29.883096    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:39:29.883112    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:39:29.894479    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:39:29.894492    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:29.907413    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:39:29.907423    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:39:29.921311    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:39:29.921326    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:39:29.932980    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:39:29.932993    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:39:29.954406    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:39:29.954416    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:39:29.966225    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:29.966237    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:29.990331    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:39:29.990341    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:39:32.505026    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:37.507260    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:37.507484    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:37.534322    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:39:37.534452    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:37.551758    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:39:37.551832    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:37.565415    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:39:37.565487    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:37.576367    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:39:37.576433    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:37.586872    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:39:37.586939    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:37.597423    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:39:37.597490    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:37.610432    5115 logs.go:276] 0 containers: []
	W0729 16:39:37.610443    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:37.610495    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:37.621320    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:39:37.621340    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:39:37.621346    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:39:37.635464    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:39:37.635476    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:39:37.647142    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:37.647153    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:37.672143    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:39:37.672150    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:39:37.688902    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:39:37.688913    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:39:37.706253    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:39:37.706265    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:39:37.727669    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:39:37.727679    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:39:37.741451    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:39:37.741460    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:39:37.753485    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:39:37.753499    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:39:37.774064    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:39:37.774084    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:39:37.786205    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:39:37.786222    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:37.799003    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:37.799017    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:37.804040    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:37.804051    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:37.853119    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:39:37.853135    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:39:37.894072    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:39:37.894083    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:39:37.924754    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:37.924763    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:37.967491    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:39:37.967509    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:39:40.490709    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:45.491097    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:45.491223    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:45.503194    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:39:45.503278    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:45.513917    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:39:45.513997    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:45.524841    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:39:45.524906    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:45.534975    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:39:45.535046    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:45.545691    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:39:45.545762    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:45.556097    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:39:45.556162    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:45.566566    5115 logs.go:276] 0 containers: []
	W0729 16:39:45.566578    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:45.566640    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:45.576984    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:39:45.577000    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:45.577005    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:45.614853    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:39:45.614865    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:39:45.629190    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:39:45.629203    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:39:45.640545    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:39:45.640555    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:39:45.651457    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:39:45.651469    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:39:45.669631    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:39:45.669647    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:39:45.682787    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:45.682800    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:45.687711    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:45.687720    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:45.725127    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:39:45.725141    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:39:45.739997    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:39:45.740005    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:39:45.752721    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:39:45.752731    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:45.765471    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:39:45.765483    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:39:45.804563    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:39:45.804573    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:39:45.823179    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:39:45.823194    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:39:45.845671    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:39:45.845680    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:39:45.859022    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:39:45.859033    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:39:45.871823    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:45.871836    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:48.403913    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:39:53.406182    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:39:53.406336    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:39:53.421683    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:39:53.421761    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:39:53.438135    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:39:53.438198    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:39:53.448543    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:39:53.448614    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:39:53.459356    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:39:53.459430    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:39:53.469868    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:39:53.469934    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:39:53.480244    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:39:53.480310    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:39:53.492442    5115 logs.go:276] 0 containers: []
	W0729 16:39:53.492456    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:39:53.492520    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:39:53.503662    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:39:53.503681    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:39:53.503687    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:39:53.516046    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:39:53.516060    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:39:53.530246    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:39:53.530256    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:39:53.542465    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:39:53.542473    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:39:53.582033    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:39:53.582046    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:39:53.596827    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:39:53.596844    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:39:53.608907    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:39:53.608921    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:39:53.651222    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:39:53.651235    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:39:53.691600    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:39:53.691615    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:39:53.704387    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:39:53.704396    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:39:53.732589    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:39:53.732607    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:39:53.751603    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:39:53.751619    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:39:53.777837    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:39:53.777855    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:39:53.792737    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:39:53.792752    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:39:53.809571    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:39:53.809587    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:39:53.821900    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:39:53.821913    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:39:53.825852    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:39:53.825858    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:39:56.340012    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:01.340977    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:01.341184    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:01.354621    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:40:01.354696    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:01.367284    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:40:01.367354    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:01.378821    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:40:01.378892    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:01.390617    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:40:01.390689    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:01.402293    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:40:01.402361    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:01.414093    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:40:01.414157    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:01.425286    5115 logs.go:276] 0 containers: []
	W0729 16:40:01.425297    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:01.425357    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:01.436613    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:40:01.436630    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:01.436636    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:01.473161    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:40:01.473174    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:40:01.488459    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:40:01.488475    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:40:01.500763    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:40:01.500780    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:40:01.513499    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:40:01.513512    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:40:01.525893    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:40:01.525907    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:40:01.548335    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:01.548345    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:01.572971    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:01.572985    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:01.613624    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:40:01.613638    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:40:01.627054    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:40:01.627069    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:01.639532    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:01.639544    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:01.644516    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:40:01.644523    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:40:01.660327    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:40:01.660338    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:40:01.699848    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:40:01.699862    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:40:01.713827    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:40:01.713836    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:40:01.725436    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:40:01.725446    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:40:01.736979    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:40:01.736990    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:40:04.260828    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:09.262920    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:09.262995    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:09.274585    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:40:09.274656    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:09.285989    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:40:09.286058    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:09.297682    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:40:09.297756    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:09.308839    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:40:09.308911    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:09.319814    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:40:09.319887    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:09.331289    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:40:09.331355    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:09.342999    5115 logs.go:276] 0 containers: []
	W0729 16:40:09.343015    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:09.343078    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:09.354865    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:40:09.354887    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:40:09.354893    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:40:09.399967    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:40:09.399997    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:40:09.415445    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:40:09.415460    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:40:09.427166    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:40:09.427179    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:40:09.449260    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:40:09.449270    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:40:09.464159    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:40:09.464176    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:40:09.475935    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:09.475948    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:09.499895    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:09.499913    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:09.504441    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:40:09.504449    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:40:09.520858    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:40:09.520872    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:40:09.533254    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:40:09.533269    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:40:09.545070    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:40:09.545081    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:40:09.562830    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:40:09.562842    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:40:09.574220    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:40:09.574234    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:09.586482    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:09.586491    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:09.625101    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:40:09.625115    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:40:09.637998    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:09.638008    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
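A complete gathering pass, as above, fans out over a fixed set of host sources plus one docker logs --tail 400 call per discovered container ID. A sketch of that dispatch; the command strings are copied verbatim from the log, while the surrounding structure is illustrative:

// Hypothetical sketch of one "Gathering logs for ..." pass: every source
// maps to a shell command, with container logs capped at the last 400 lines.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet":        "sudo journalctl -u kubelet -n 400",
		"Docker":         "sudo journalctl -u docker -u cri-docker -n 400",
		"dmesg":          "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes": "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		// prefer crictl when present, fall back to docker
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		// one entry like this per container ID found in the discovery step
		"kube-apiserver [b53bd3d67821]": "docker logs --tail 400 b53bd3d67821",
	}
	for name, cmd := range sources {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Print(string(out))
	}
}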
	I0729 16:40:12.174262    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:17.176341    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:17.176455    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:17.187616    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:40:17.187689    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:17.200941    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:40:17.201016    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:17.213477    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:40:17.213549    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:17.225174    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:40:17.225254    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:17.236578    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:40:17.236650    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:17.248031    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:40:17.248104    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:17.258758    5115 logs.go:276] 0 containers: []
	W0729 16:40:17.258790    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:17.258856    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:17.270438    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:40:17.270455    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:40:17.270463    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:40:17.311197    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:40:17.311212    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:40:17.326775    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:40:17.326789    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:40:17.349717    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:40:17.349731    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:40:17.368137    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:40:17.368153    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:40:17.379877    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:17.379889    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:17.403788    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:40:17.403798    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:17.416334    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:17.416346    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:17.420705    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:40:17.420713    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:40:17.432302    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:40:17.432314    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:40:17.443791    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:40:17.443803    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:40:17.457403    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:40:17.457415    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:40:17.471128    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:17.471138    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:17.508702    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:40:17.508714    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:40:17.520512    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:40:17.520523    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:40:17.532440    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:40:17.532451    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:40:17.543868    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:17.543877    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:20.082668    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:25.084818    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:25.084910    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:25.096326    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:40:25.096409    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:25.108024    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:40:25.108101    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:25.119060    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:40:25.119124    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:25.130932    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:40:25.131000    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:25.142452    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:40:25.142533    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:25.155415    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:40:25.155483    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:25.166525    5115 logs.go:276] 0 containers: []
	W0729 16:40:25.166541    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:25.166606    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:25.188579    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:40:25.188599    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:25.188604    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:25.229985    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:25.230001    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:25.266874    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:40:25.266887    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:40:25.280029    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:40:25.280045    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:40:25.291490    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:40:25.291502    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:25.303570    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:25.303582    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:25.307759    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:40:25.307766    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:40:25.319743    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:40:25.319754    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:40:25.332401    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:40:25.332413    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:40:25.346733    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:40:25.346743    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:40:25.358211    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:40:25.358222    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:40:25.377196    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:40:25.377206    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:40:25.391311    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:40:25.391324    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:40:25.429993    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:40:25.430006    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:40:25.444047    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:40:25.444060    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:40:25.465269    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:40:25.465283    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:40:25.480684    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:25.480695    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:28.005198    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:33.007316    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:33.007413    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:33.018635    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:40:33.018745    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:33.030101    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:40:33.030175    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:33.041619    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:40:33.041688    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:33.054328    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:40:33.054399    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:33.065647    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:40:33.065724    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:33.077161    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:40:33.077242    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:33.088196    5115 logs.go:276] 0 containers: []
	W0729 16:40:33.088206    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:33.088269    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:33.104787    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:40:33.104805    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:40:33.104811    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:40:33.125791    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:40:33.125802    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:40:33.138654    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:33.138666    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:33.162003    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:40:33.162015    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:40:33.176560    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:40:33.176575    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:40:33.195394    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:40:33.195406    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:40:33.206679    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:40:33.206691    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:40:33.217804    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:40:33.217816    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:33.229878    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:40:33.229890    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:40:33.241725    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:40:33.241737    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:40:33.259467    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:33.259480    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:33.295835    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:33.295844    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:33.299832    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:33.299844    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:33.335112    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:40:33.335123    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:40:33.349754    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:40:33.349768    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:40:33.387193    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:40:33.387204    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:40:33.401786    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:40:33.401799    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:40:35.915297    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:40.917526    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:40.917611    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:40.930480    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:40:40.930559    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:40.942360    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:40:40.942433    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:40.955064    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:40:40.955135    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:40.965765    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:40:40.965838    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:40.976461    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:40:40.976524    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:40.987361    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:40:40.987439    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:40.997807    5115 logs.go:276] 0 containers: []
	W0729 16:40:40.997817    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:40.997872    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:41.008494    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:40:41.008517    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:41.008524    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:41.012767    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:40:41.012776    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:40:41.024910    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:40:41.024924    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:41.037212    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:41.037226    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:41.073302    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:41.073310    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:41.108387    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:40:41.108397    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:40:41.125561    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:40:41.125576    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:40:41.141326    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:40:41.141336    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:40:41.152126    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:40:41.152138    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:40:41.166131    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:40:41.166144    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:40:41.206792    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:40:41.206802    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:40:41.222334    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:40:41.222348    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:40:41.243539    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:40:41.243552    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:40:41.255693    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:40:41.255711    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:40:41.270004    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:40:41.270016    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:40:41.281368    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:40:41.281378    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:40:41.300922    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:41.300932    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:43.827164    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:48.829273    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:48.829355    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:48.840273    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:40:48.840342    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:48.850949    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:40:48.851015    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:48.864953    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:40:48.865022    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:48.875955    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:40:48.876030    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:48.888644    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:40:48.888713    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:48.899078    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:40:48.899152    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:48.909450    5115 logs.go:276] 0 containers: []
	W0729 16:40:48.909463    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:48.909527    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:48.920089    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:40:48.920108    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:40:48.920114    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:40:48.934012    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:40:48.934022    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:40:48.946323    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:40:48.946333    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:40:48.958841    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:48.958853    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:48.962995    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:40:48.963003    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:40:49.001047    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:40:49.001058    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:40:49.013405    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:40:49.013418    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:40:49.036916    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:40:49.036928    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:40:49.048215    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:49.048227    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:49.071521    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:40:49.071532    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:40:49.085223    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:49.085232    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:49.120020    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:40:49.120031    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:40:49.134551    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:40:49.134561    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:49.147844    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:49.147855    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:49.185368    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:40:49.185377    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:40:49.198217    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:40:49.198226    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:40:49.216104    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:40:49.216116    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:40:51.742159    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:40:56.744336    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:40:56.744421    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:40:56.755092    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:40:56.755162    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:40:56.765314    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:40:56.765382    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:40:56.775801    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:40:56.775876    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:40:56.786260    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:40:56.786335    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:40:56.797145    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:40:56.797210    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:40:56.808170    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:40:56.808247    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:40:56.818366    5115 logs.go:276] 0 containers: []
	W0729 16:40:56.818378    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:40:56.818437    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:40:56.829497    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:40:56.829516    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:40:56.829522    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:40:56.840893    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:40:56.840904    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:40:56.863765    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:40:56.863775    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:40:56.898844    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:40:56.898851    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:40:56.902716    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:40:56.902725    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:40:56.924038    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:40:56.924050    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:40:56.935374    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:40:56.935389    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:40:56.946298    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:40:56.946309    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:40:56.982958    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:40:56.982972    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:40:56.997590    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:40:56.997600    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:40:57.013830    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:40:57.013845    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:40:57.026734    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:40:57.026748    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:40:57.042458    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:40:57.042473    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:40:57.060717    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:40:57.060729    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:40:57.072861    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:40:57.072870    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:40:57.110018    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:40:57.110030    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:40:57.123617    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:40:57.123627    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:40:59.637146    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:04.639365    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:04.639526    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:41:04.650059    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:41:04.650125    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:41:04.660611    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:41:04.660678    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:41:04.671103    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:41:04.671166    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:41:04.681534    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:41:04.681606    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:41:04.691756    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:41:04.691820    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:41:04.702047    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:41:04.702109    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:41:04.712105    5115 logs.go:276] 0 containers: []
	W0729 16:41:04.712118    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:41:04.712179    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:41:04.722614    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:41:04.722634    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:41:04.722641    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:41:04.726899    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:41:04.726912    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:41:04.761656    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:41:04.761668    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:41:04.776269    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:41:04.776281    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:41:04.787645    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:41:04.787657    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:41:04.799174    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:41:04.799185    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:41:04.836941    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:41:04.836954    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:41:04.850747    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:41:04.850757    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:41:04.872043    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:41:04.872054    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:41:04.886183    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:41:04.886194    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:41:04.897880    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:41:04.897892    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:41:04.909979    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:41:04.909991    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:41:04.933658    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:41:04.933669    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:41:04.945395    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:41:04.945407    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:41:04.983650    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:41:04.983664    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:41:05.005634    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:41:05.005646    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:41:05.017684    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:41:05.017694    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:41:07.531242    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:12.533384    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:12.533481    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:41:12.545187    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:41:12.545259    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:41:12.555386    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:41:12.555455    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:41:12.566176    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:41:12.566253    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:41:12.579573    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:41:12.579645    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:41:12.589755    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:41:12.589822    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:41:12.600705    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:41:12.600783    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:41:12.610698    5115 logs.go:276] 0 containers: []
	W0729 16:41:12.610709    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:41:12.610769    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:41:12.621488    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:41:12.621505    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:41:12.621511    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:41:12.639589    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:41:12.639598    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:41:12.653786    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:41:12.653797    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:41:12.665715    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:41:12.665728    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:41:12.689128    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:41:12.689136    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:41:12.700651    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:41:12.700664    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:41:12.722694    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:41:12.722707    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:41:12.734975    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:41:12.734986    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:41:12.772794    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:41:12.772813    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:41:12.785066    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:41:12.785077    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:41:12.797336    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:41:12.797347    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:41:12.811743    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:41:12.811755    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:41:12.816410    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:41:12.816416    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:41:12.851147    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:41:12.851161    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:41:12.869727    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:41:12.869739    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:41:12.892076    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:41:12.892091    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:41:12.907756    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:41:12.907769    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:41:15.446069    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:20.448169    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:20.448280    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:41:20.459865    5115 logs.go:276] 2 containers: [b53bd3d67821 f5562f98bfc0]
	I0729 16:41:20.459950    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:41:20.475989    5115 logs.go:276] 2 containers: [460653d328d9 bc11e1c032a5]
	I0729 16:41:20.476068    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:41:20.487156    5115 logs.go:276] 1 containers: [8c74f5cdb8b5]
	I0729 16:41:20.487227    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:41:20.497810    5115 logs.go:276] 2 containers: [39332025bba2 713ebdc98434]
	I0729 16:41:20.497883    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:41:20.508101    5115 logs.go:276] 1 containers: [8147144663d3]
	I0729 16:41:20.508172    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:41:20.518901    5115 logs.go:276] 2 containers: [66efbbbe368e 992689aa0398]
	I0729 16:41:20.518972    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:41:20.529449    5115 logs.go:276] 0 containers: []
	W0729 16:41:20.529463    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:41:20.529522    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:41:20.539788    5115 logs.go:276] 2 containers: [bde71cf45b08 e2392a1e3d6b]
	I0729 16:41:20.539808    5115 logs.go:123] Gathering logs for kube-proxy [8147144663d3] ...
	I0729 16:41:20.539814    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8147144663d3"
	I0729 16:41:20.551702    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:41:20.551712    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:41:20.564854    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:41:20.564865    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:41:20.601647    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:41:20.601658    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:41:20.636271    5115 logs.go:123] Gathering logs for kube-controller-manager [66efbbbe368e] ...
	I0729 16:41:20.636286    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66efbbbe368e"
	I0729 16:41:20.658196    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:41:20.658208    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:41:20.681367    5115 logs.go:123] Gathering logs for kube-apiserver [b53bd3d67821] ...
	I0729 16:41:20.681375    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b53bd3d67821"
	I0729 16:41:20.695212    5115 logs.go:123] Gathering logs for coredns [8c74f5cdb8b5] ...
	I0729 16:41:20.695223    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c74f5cdb8b5"
	I0729 16:41:20.706438    5115 logs.go:123] Gathering logs for storage-provisioner [bde71cf45b08] ...
	I0729 16:41:20.706450    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bde71cf45b08"
	I0729 16:41:20.717576    5115 logs.go:123] Gathering logs for kube-scheduler [713ebdc98434] ...
	I0729 16:41:20.717586    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 713ebdc98434"
	I0729 16:41:20.739137    5115 logs.go:123] Gathering logs for kube-controller-manager [992689aa0398] ...
	I0729 16:41:20.739149    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992689aa0398"
	I0729 16:41:20.751153    5115 logs.go:123] Gathering logs for etcd [460653d328d9] ...
	I0729 16:41:20.751164    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 460653d328d9"
	I0729 16:41:20.764521    5115 logs.go:123] Gathering logs for etcd [bc11e1c032a5] ...
	I0729 16:41:20.764535    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc11e1c032a5"
	I0729 16:41:20.778803    5115 logs.go:123] Gathering logs for kube-scheduler [39332025bba2] ...
	I0729 16:41:20.778816    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39332025bba2"
	I0729 16:41:20.790594    5115 logs.go:123] Gathering logs for storage-provisioner [e2392a1e3d6b] ...
	I0729 16:41:20.790605    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2392a1e3d6b"
	I0729 16:41:20.801615    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:41:20.801626    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:41:20.805625    5115 logs.go:123] Gathering logs for kube-apiserver [f5562f98bfc0] ...
	I0729 16:41:20.805631    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5562f98bfc0"
	I0729 16:41:23.345186    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:28.347318    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:28.347350    5115 kubeadm.go:597] duration metric: took 4m3.854377s to restartPrimaryControlPlane
	W0729 16:41:28.347378    5115 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 16:41:28.347394    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
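The duration metric above marks the overall deadline expiring: every healthz probe in the preceding four minutes failed, so minikube abandons the restart and resets the cluster with kubeadm reset --force. A sketch of that retry-until-deadline-then-reset shape, using stand-in functions rather than minikube's real internals:

// Hypothetical sketch of the give-up logic above: retry a health check
// until an overall deadline, gathering diagnostics between attempts,
// then fall back to a full cluster reset.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitThenReset(check func() error, gather func(), deadline time.Duration) error {
	start := time.Now()
	for time.Since(start) < deadline {
		if err := check(); err == nil {
			return nil // control plane came back; no reset needed
		}
		gather() // the repeated "Gathering logs for ..." blocks above
	}
	// after ~4m of failed probes the log shows:
	//   "Unable to restart control-plane node(s), will reset cluster"
	//   kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
	return errors.New("restart failed; resetting cluster")
}

func main() {
	err := waitThenReset(
		func() error { return errors.New("i/o timeout") }, // stand-in probe
		func() {},            // stand-in gatherer
		100*time.Millisecond, // stand-in deadline (the log shows ~4 minutes)
	)
	fmt.Println(err)
}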
	I0729 16:41:29.303577    5115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 16:41:29.309013    5115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 16:41:29.311954    5115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 16:41:29.314868    5115 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 16:41:29.314875    5115 kubeadm.go:157] found existing configuration files:
	
	I0729 16:41:29.314900    5115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/admin.conf
	I0729 16:41:29.317308    5115 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 16:41:29.317331    5115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 16:41:29.319710    5115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/kubelet.conf
	I0729 16:41:29.322684    5115 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 16:41:29.322707    5115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 16:41:29.325347    5115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/controller-manager.conf
	I0729 16:41:29.327830    5115 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 16:41:29.327853    5115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 16:41:29.330861    5115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/scheduler.conf
	I0729 16:41:29.333223    5115 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 16:41:29.333242    5115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
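Because the reset deleted /etc/kubernetes/*.conf, the batch ls -la check above exited with status 2, and each per-file grep for the control-plane endpoint also fails, so every kubeconfig is removed before kubeadm init regenerates it. A minimal sketch of that check-then-remove sweep (hypothetical code; endpoint and file names taken from the log):

// Hypothetical sketch of the stale-config sweep above: keep each
// /etc/kubernetes/*.conf only if it references the expected control-plane
// endpoint; otherwise remove it so kubeadm init can rewrite it.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50503"
	for _, name := range []string{"admin.conf", "kubelet.conf",
		"controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + name
		// grep exits non-zero if the endpoint is absent or the file is missing
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
			_ = exec.Command("sudo", "rm", "-f", path).Run()
		}
	}
}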
	I0729 16:41:29.335914    5115 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 16:41:29.353412    5115 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 16:41:29.353448    5115 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 16:41:29.404968    5115 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 16:41:29.405031    5115 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 16:41:29.405084    5115 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 16:41:29.452487    5115 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 16:41:29.456679    5115 out.go:204]   - Generating certificates and keys ...
	I0729 16:41:29.456718    5115 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 16:41:29.456752    5115 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 16:41:29.456791    5115 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 16:41:29.456829    5115 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 16:41:29.456860    5115 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 16:41:29.456890    5115 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 16:41:29.456926    5115 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 16:41:29.456957    5115 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 16:41:29.456991    5115 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 16:41:29.457030    5115 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 16:41:29.457053    5115 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 16:41:29.457079    5115 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 16:41:29.755027    5115 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 16:41:29.821682    5115 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 16:41:29.924030    5115 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 16:41:30.083949    5115 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 16:41:30.111991    5115 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 16:41:30.112307    5115 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 16:41:30.112329    5115 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 16:41:30.180887    5115 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 16:41:30.184121    5115 out.go:204]   - Booting up control plane ...
	I0729 16:41:30.184168    5115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 16:41:30.184207    5115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 16:41:30.184249    5115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 16:41:30.184294    5115 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 16:41:30.184404    5115 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 16:41:34.687258    5115 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502680 seconds
	I0729 16:41:34.687355    5115 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 16:41:34.691994    5115 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 16:41:35.222193    5115 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 16:41:35.222485    5115 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-170000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 16:41:35.726538    5115 kubeadm.go:310] [bootstrap-token] Using token: l2hdh8.ozx7sr07436dbjkf
	I0729 16:41:35.733014    5115 out.go:204]   - Configuring RBAC rules ...
	I0729 16:41:35.733074    5115 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 16:41:35.733114    5115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 16:41:35.740337    5115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 16:41:35.741920    5115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0729 16:41:35.742826    5115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 16:41:35.744092    5115 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 16:41:35.748960    5115 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 16:41:35.912058    5115 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 16:41:36.133437    5115 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 16:41:36.134160    5115 kubeadm.go:310] 
	I0729 16:41:36.134196    5115 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 16:41:36.134199    5115 kubeadm.go:310] 
	I0729 16:41:36.134250    5115 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 16:41:36.134255    5115 kubeadm.go:310] 
	I0729 16:41:36.134272    5115 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 16:41:36.134316    5115 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 16:41:36.134348    5115 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 16:41:36.134353    5115 kubeadm.go:310] 
	I0729 16:41:36.134382    5115 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 16:41:36.134386    5115 kubeadm.go:310] 
	I0729 16:41:36.134409    5115 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 16:41:36.134413    5115 kubeadm.go:310] 
	I0729 16:41:36.134441    5115 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 16:41:36.134481    5115 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 16:41:36.134524    5115 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 16:41:36.134528    5115 kubeadm.go:310] 
	I0729 16:41:36.134584    5115 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 16:41:36.134619    5115 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 16:41:36.134623    5115 kubeadm.go:310] 
	I0729 16:41:36.134675    5115 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token l2hdh8.ozx7sr07436dbjkf \
	I0729 16:41:36.134735    5115 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b9cecc1c3dd985258772234c33c785f9bcad6eff884cc7ff19b79a518c1cf4e1 \
	I0729 16:41:36.134747    5115 kubeadm.go:310] 	--control-plane 
	I0729 16:41:36.134751    5115 kubeadm.go:310] 
	I0729 16:41:36.134796    5115 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 16:41:36.134802    5115 kubeadm.go:310] 
	I0729 16:41:36.134844    5115 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token l2hdh8.ozx7sr07436dbjkf \
	I0729 16:41:36.134894    5115 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b9cecc1c3dd985258772234c33c785f9bcad6eff884cc7ff19b79a518c1cf4e1 
	I0729 16:41:36.135138    5115 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 16:41:36.135147    5115 cni.go:84] Creating CNI manager for ""
	I0729 16:41:36.135156    5115 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:41:36.138045    5115 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 16:41:36.144996    5115 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 16:41:36.147997    5115 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
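
Configuring the bridge CNI amounts to dropping a conflist into /etc/cni/net.d, which is what the scp line above does. A rough sketch of writing such a file; the JSON below is a generic bridge/portmap example for illustration only and does not reproduce minikube's actual 496-byte 1-k8s.conflist template:

```go
package main

import "os"

// A generic bridge CNI configuration; field values are illustrative and do
// not reproduce minikube's actual /etc/cni/net.d/1-k8s.conflist template.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
```
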
	I0729 16:41:36.152749    5115 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 16:41:36.152793    5115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 16:41:36.152831    5115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-170000 minikube.k8s.io/updated_at=2024_07_29T16_41_36_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a9ecc7e4bd8b0211d6b42552bd8a0113828840b9 minikube.k8s.io/name=stopped-upgrade-170000 minikube.k8s.io/primary=true
	I0729 16:41:36.194923    5115 ops.go:34] apiserver oom_adj: -16
	I0729 16:41:36.194959    5115 kubeadm.go:1113] duration metric: took 42.205958ms to wait for elevateKubeSystemPrivileges
	I0729 16:41:36.195011    5115 kubeadm.go:394] duration metric: took 4m11.715501833s to StartCluster
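
The two kubectl invocations a few lines up ("elevateKubeSystemPrivileges") grant cluster-admin to kube-system's default service account and stamp the node with minikube bookkeeping labels. A sketch of the same pair of commands driven from Go; the binary and kubeconfig paths are the ones shown in the log, and the label set is abbreviated to a single label:

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command under sudo and surfaces its combined output on failure.
func run(args ...string) error {
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %w\n%s", args, err, out)
	}
	return nil
}

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.24.1/kubectl" // path as in the log
	kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

	// Bind cluster-admin to kube-system's default service account.
	if err := run(kubectl, "create", "clusterrolebinding", "minikube-rbac",
		"--clusterrole=cluster-admin",
		"--serviceaccount=kube-system:default", kubeconfig); err != nil {
		panic(err)
	}
	// Label the node with minikube metadata (abbreviated to one label here).
	if err := run(kubectl, kubeconfig, "label", "--overwrite",
		"nodes", "stopped-upgrade-170000",
		"minikube.k8s.io/primary=true"); err != nil {
		panic(err)
	}
}
```
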
	I0729 16:41:36.195024    5115 settings.go:142] acquiring lock: {Name:mk1df9c174f764d47de5a2c25ea0f0fc28c1d98c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:41:36.195117    5115 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:41:36.195560    5115 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/kubeconfig: {Name:mkadb977bd50641dea3f6c522a66ad62f461af12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:41:36.195741    5115 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:41:36.195780    5115 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 16:41:36.195816    5115 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-170000"
	I0729 16:41:36.195829    5115 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-170000"
	W0729 16:41:36.195832    5115 addons.go:243] addon storage-provisioner should already be in state true
	I0729 16:41:36.195844    5115 host.go:66] Checking if "stopped-upgrade-170000" exists ...
	I0729 16:41:36.195845    5115 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:41:36.195845    5115 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-170000"
	I0729 16:41:36.195872    5115 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-170000"
	I0729 16:41:36.196807    5115 kapi.go:59] client config for stopped-upgrade-170000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/client.key", CAFile:"/Users/jenkins/minikube-integration/19348-1218/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102460080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
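
The rest.Config dumped above is client-go's client configuration, built from the profile's client certificate, key, and CA. A minimal sketch of constructing an equivalent client by hand with k8s.io/client-go, using the certificate paths shown in the log:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Certificate paths are the ones dumped in the log line above.
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/stopped-upgrade-170000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/19348-1218/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes in cluster:", len(nodes.Items))
}
```
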
	I0729 16:41:36.196935    5115 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-170000"
	W0729 16:41:36.196939    5115 addons.go:243] addon default-storageclass should already be in state true
	I0729 16:41:36.196947    5115 host.go:66] Checking if "stopped-upgrade-170000" exists ...
	I0729 16:41:36.200017    5115 out.go:177] * Verifying Kubernetes components...
	I0729 16:41:36.200320    5115 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 16:41:36.204193    5115 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 16:41:36.204200    5115 sshutil.go:53] new ssh client: &{IP:localhost Port:50469 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/stopped-upgrade-170000/id_rsa Username:docker}
	I0729 16:41:36.207956    5115 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 16:41:36.212014    5115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 16:41:36.214948    5115 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 16:41:36.214954    5115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 16:41:36.214959    5115 sshutil.go:53] new ssh client: &{IP:localhost Port:50469 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/stopped-upgrade-170000/id_rsa Username:docker}
	I0729 16:41:36.285041    5115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 16:41:36.292110    5115 api_server.go:52] waiting for apiserver process to appear ...
	I0729 16:41:36.292165    5115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 16:41:36.292876    5115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 16:41:36.296713    5115 api_server.go:72] duration metric: took 100.963708ms to wait for apiserver process to appear ...
	I0729 16:41:36.296722    5115 api_server.go:88] waiting for apiserver healthz status ...
	I0729 16:41:36.296729    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:36.330057    5115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 16:41:41.298722    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:41.298756    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:46.298949    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:46.298987    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:51.299331    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:51.299352    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:41:56.299674    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:41:56.299699    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:42:01.299989    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:42:01.300014    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:42:06.300576    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:42:06.300604    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 16:42:06.701309    5115 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 16:42:06.705600    5115 out.go:177] * Enabled addons: storage-provisioner
	I0729 16:42:06.712459    5115 addons.go:510] duration metric: took 30.517603666s for enable addons: enabled=[storage-provisioner]
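
From here the log settles into a poll loop: roughly every five seconds minikube GETs /healthz on the apiserver and the request times out. 10.0.2.15 is the guest-side address of QEMU's user-mode network, which is generally not reachable from the host, consistent with every probe failing. A sketch of that kind of health poll; TLS verification is disabled here purely to keep the illustration short, which is not what minikube itself does:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz probes the apiserver's /healthz until it answers 200 OK or the
// attempts run out, waiting between probes as the log's retry cadence does.
func pollHealthz(url string, attempts int, interval time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// For illustration only; a real client should verify the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		} else {
			fmt.Println("healthz:", err)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	_ = pollHealthz("https://10.0.2.15:8443/healthz", 10, 5*time.Second)
}
```
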
	I0729 16:42:11.301374    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:42:11.301418    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:42:16.302578    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:42:16.302599    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:42:21.303955    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:42:21.303992    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:42:26.306123    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:42:26.306165    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:42:31.308242    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:42:31.308270    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:42:36.310342    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:42:36.310433    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:42:36.322414    5115 logs.go:276] 1 containers: [bb81ed0f3180]
	I0729 16:42:36.322489    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:42:36.332712    5115 logs.go:276] 1 containers: [929c951f29cc]
	I0729 16:42:36.332779    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:42:36.343092    5115 logs.go:276] 2 containers: [99a859473d38 de37d84523f1]
	I0729 16:42:36.343166    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:42:36.354209    5115 logs.go:276] 1 containers: [9f649cfcb9a1]
	I0729 16:42:36.354278    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:42:36.365056    5115 logs.go:276] 1 containers: [c57b8ec13eff]
	I0729 16:42:36.365128    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:42:36.375490    5115 logs.go:276] 1 containers: [57d7e217058a]
	I0729 16:42:36.375563    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:42:36.385653    5115 logs.go:276] 0 containers: []
	W0729 16:42:36.385667    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:42:36.385730    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:42:36.396660    5115 logs.go:276] 1 containers: [5865082a29a7]
	I0729 16:42:36.396677    5115 logs.go:123] Gathering logs for storage-provisioner [5865082a29a7] ...
	I0729 16:42:36.396682    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5865082a29a7"
	I0729 16:42:36.409301    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:42:36.409311    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:42:36.434168    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:42:36.434178    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:42:36.438533    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:42:36.438540    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:42:36.475673    5115 logs.go:123] Gathering logs for etcd [929c951f29cc] ...
	I0729 16:42:36.475683    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929c951f29cc"
	I0729 16:42:36.490607    5115 logs.go:123] Gathering logs for coredns [99a859473d38] ...
	I0729 16:42:36.490619    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a859473d38"
	I0729 16:42:36.510716    5115 logs.go:123] Gathering logs for coredns [de37d84523f1] ...
	I0729 16:42:36.510731    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de37d84523f1"
	I0729 16:42:36.522746    5115 logs.go:123] Gathering logs for kube-controller-manager [57d7e217058a] ...
	I0729 16:42:36.522757    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d7e217058a"
	I0729 16:42:36.541096    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:42:36.541107    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:42:36.576111    5115 logs.go:123] Gathering logs for kube-apiserver [bb81ed0f3180] ...
	I0729 16:42:36.576122    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb81ed0f3180"
	I0729 16:42:36.590677    5115 logs.go:123] Gathering logs for kube-scheduler [9f649cfcb9a1] ...
	I0729 16:42:36.590688    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f649cfcb9a1"
	I0729 16:42:36.611962    5115 logs.go:123] Gathering logs for kube-proxy [c57b8ec13eff] ...
	I0729 16:42:36.611974    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57b8ec13eff"
	I0729 16:42:36.624113    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:42:36.624125    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
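
Each diagnostic pass above (and every repetition that follows) uses the same recipe: list the container ID for each control-plane component with a docker name filter matching the kubelet's k8s_<component> naming convention, then tail the last 400 lines of each container's logs. A condensed sketch of that gathering loop:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of containers whose name matches the kubelet
// naming convention k8s_<component>, mirroring the docker ps filters above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines, as the log-gathering pass does.
			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", c, id, out)
		}
	}
}
```
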
	I0729 16:42:39.137471    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:42:44.139868    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:42:44.140308    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:42:44.178650    5115 logs.go:276] 1 containers: [bb81ed0f3180]
	I0729 16:42:44.178773    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:42:44.200325    5115 logs.go:276] 1 containers: [929c951f29cc]
	I0729 16:42:44.200440    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:42:44.215289    5115 logs.go:276] 2 containers: [99a859473d38 de37d84523f1]
	I0729 16:42:44.215357    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:42:44.227630    5115 logs.go:276] 1 containers: [9f649cfcb9a1]
	I0729 16:42:44.227696    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:42:44.238143    5115 logs.go:276] 1 containers: [c57b8ec13eff]
	I0729 16:42:44.238209    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:42:44.249095    5115 logs.go:276] 1 containers: [57d7e217058a]
	I0729 16:42:44.249161    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:42:44.259153    5115 logs.go:276] 0 containers: []
	W0729 16:42:44.259162    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:42:44.259211    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:42:44.269206    5115 logs.go:276] 1 containers: [5865082a29a7]
	I0729 16:42:44.269224    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:42:44.269229    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:42:44.301826    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:42:44.301834    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:42:44.305910    5115 logs.go:123] Gathering logs for etcd [929c951f29cc] ...
	I0729 16:42:44.305916    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929c951f29cc"
	I0729 16:42:44.321311    5115 logs.go:123] Gathering logs for coredns [de37d84523f1] ...
	I0729 16:42:44.321325    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de37d84523f1"
	I0729 16:42:44.332872    5115 logs.go:123] Gathering logs for kube-controller-manager [57d7e217058a] ...
	I0729 16:42:44.332886    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d7e217058a"
	I0729 16:42:44.350693    5115 logs.go:123] Gathering logs for storage-provisioner [5865082a29a7] ...
	I0729 16:42:44.350709    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5865082a29a7"
	I0729 16:42:44.362646    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:42:44.362664    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:42:44.386062    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:42:44.386072    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:42:44.397303    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:42:44.397314    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:42:44.431844    5115 logs.go:123] Gathering logs for kube-apiserver [bb81ed0f3180] ...
	I0729 16:42:44.431855    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb81ed0f3180"
	I0729 16:42:44.446035    5115 logs.go:123] Gathering logs for coredns [99a859473d38] ...
	I0729 16:42:44.446046    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a859473d38"
	I0729 16:42:44.457983    5115 logs.go:123] Gathering logs for kube-scheduler [9f649cfcb9a1] ...
	I0729 16:42:44.457996    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f649cfcb9a1"
	I0729 16:42:44.475030    5115 logs.go:123] Gathering logs for kube-proxy [c57b8ec13eff] ...
	I0729 16:42:44.475042    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57b8ec13eff"
	I0729 16:42:46.992418    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:42:51.994233    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:42:51.994673    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:42:52.033651    5115 logs.go:276] 1 containers: [bb81ed0f3180]
	I0729 16:42:52.033783    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:42:52.053515    5115 logs.go:276] 1 containers: [929c951f29cc]
	I0729 16:42:52.053588    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:42:52.070224    5115 logs.go:276] 2 containers: [99a859473d38 de37d84523f1]
	I0729 16:42:52.070308    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:42:52.082024    5115 logs.go:276] 1 containers: [9f649cfcb9a1]
	I0729 16:42:52.082084    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:42:52.092853    5115 logs.go:276] 1 containers: [c57b8ec13eff]
	I0729 16:42:52.092914    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:42:52.103775    5115 logs.go:276] 1 containers: [57d7e217058a]
	I0729 16:42:52.103834    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:42:52.114462    5115 logs.go:276] 0 containers: []
	W0729 16:42:52.114471    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:42:52.114517    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:42:52.125395    5115 logs.go:276] 1 containers: [5865082a29a7]
	I0729 16:42:52.125411    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:42:52.125417    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:42:52.158924    5115 logs.go:123] Gathering logs for coredns [99a859473d38] ...
	I0729 16:42:52.158938    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a859473d38"
	I0729 16:42:52.174239    5115 logs.go:123] Gathering logs for kube-scheduler [9f649cfcb9a1] ...
	I0729 16:42:52.174253    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f649cfcb9a1"
	I0729 16:42:52.189774    5115 logs.go:123] Gathering logs for kube-controller-manager [57d7e217058a] ...
	I0729 16:42:52.189787    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d7e217058a"
	I0729 16:42:52.207184    5115 logs.go:123] Gathering logs for storage-provisioner [5865082a29a7] ...
	I0729 16:42:52.207196    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5865082a29a7"
	I0729 16:42:52.218356    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:42:52.218370    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:42:52.241365    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:42:52.241374    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:42:52.252404    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:42:52.252417    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:42:52.285057    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:42:52.285075    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:42:52.289314    5115 logs.go:123] Gathering logs for kube-apiserver [bb81ed0f3180] ...
	I0729 16:42:52.289319    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb81ed0f3180"
	I0729 16:42:52.304340    5115 logs.go:123] Gathering logs for etcd [929c951f29cc] ...
	I0729 16:42:52.304350    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929c951f29cc"
	I0729 16:42:52.318117    5115 logs.go:123] Gathering logs for coredns [de37d84523f1] ...
	I0729 16:42:52.318129    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de37d84523f1"
	I0729 16:42:52.331247    5115 logs.go:123] Gathering logs for kube-proxy [c57b8ec13eff] ...
	I0729 16:42:52.331260    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57b8ec13eff"
	I0729 16:42:54.848718    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:42:59.849711    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:42:59.849838    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:42:59.864797    5115 logs.go:276] 1 containers: [bb81ed0f3180]
	I0729 16:42:59.864867    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:42:59.878037    5115 logs.go:276] 1 containers: [929c951f29cc]
	I0729 16:42:59.878107    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:42:59.890044    5115 logs.go:276] 2 containers: [99a859473d38 de37d84523f1]
	I0729 16:42:59.890105    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:42:59.899911    5115 logs.go:276] 1 containers: [9f649cfcb9a1]
	I0729 16:42:59.899967    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:42:59.910480    5115 logs.go:276] 1 containers: [c57b8ec13eff]
	I0729 16:42:59.910542    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:42:59.920518    5115 logs.go:276] 1 containers: [57d7e217058a]
	I0729 16:42:59.920588    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:42:59.930488    5115 logs.go:276] 0 containers: []
	W0729 16:42:59.930500    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:42:59.930558    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:42:59.940808    5115 logs.go:276] 1 containers: [5865082a29a7]
	I0729 16:42:59.940824    5115 logs.go:123] Gathering logs for kube-apiserver [bb81ed0f3180] ...
	I0729 16:42:59.940829    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb81ed0f3180"
	I0729 16:42:59.954798    5115 logs.go:123] Gathering logs for etcd [929c951f29cc] ...
	I0729 16:42:59.954811    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929c951f29cc"
	I0729 16:42:59.968788    5115 logs.go:123] Gathering logs for coredns [99a859473d38] ...
	I0729 16:42:59.968801    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a859473d38"
	I0729 16:42:59.982660    5115 logs.go:123] Gathering logs for coredns [de37d84523f1] ...
	I0729 16:42:59.982672    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de37d84523f1"
	I0729 16:42:59.995589    5115 logs.go:123] Gathering logs for kube-scheduler [9f649cfcb9a1] ...
	I0729 16:42:59.995601    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f649cfcb9a1"
	I0729 16:43:00.016869    5115 logs.go:123] Gathering logs for kube-proxy [c57b8ec13eff] ...
	I0729 16:43:00.016882    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57b8ec13eff"
	I0729 16:43:00.028735    5115 logs.go:123] Gathering logs for storage-provisioner [5865082a29a7] ...
	I0729 16:43:00.028750    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5865082a29a7"
	I0729 16:43:00.040403    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:43:00.040415    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:43:00.078432    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:43:00.078446    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:43:00.089605    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:43:00.089619    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:43:00.114251    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:43:00.114259    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:43:00.118143    5115 logs.go:123] Gathering logs for kube-controller-manager [57d7e217058a] ...
	I0729 16:43:00.118151    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d7e217058a"
	I0729 16:43:00.135102    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:43:00.135113    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:43:02.669928    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:43:07.672249    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:43:07.672686    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:43:07.712238    5115 logs.go:276] 1 containers: [bb81ed0f3180]
	I0729 16:43:07.712366    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:43:07.733253    5115 logs.go:276] 1 containers: [929c951f29cc]
	I0729 16:43:07.733369    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:43:07.748411    5115 logs.go:276] 2 containers: [99a859473d38 de37d84523f1]
	I0729 16:43:07.748488    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:43:07.760916    5115 logs.go:276] 1 containers: [9f649cfcb9a1]
	I0729 16:43:07.760999    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:43:07.772457    5115 logs.go:276] 1 containers: [c57b8ec13eff]
	I0729 16:43:07.772524    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:43:07.783134    5115 logs.go:276] 1 containers: [57d7e217058a]
	I0729 16:43:07.783195    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:43:07.793328    5115 logs.go:276] 0 containers: []
	W0729 16:43:07.793342    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:43:07.793401    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:43:07.804131    5115 logs.go:276] 1 containers: [5865082a29a7]
	I0729 16:43:07.804148    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:43:07.804152    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:43:07.838405    5115 logs.go:123] Gathering logs for kube-apiserver [bb81ed0f3180] ...
	I0729 16:43:07.838419    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb81ed0f3180"
	I0729 16:43:07.852482    5115 logs.go:123] Gathering logs for coredns [99a859473d38] ...
	I0729 16:43:07.852494    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a859473d38"
	I0729 16:43:07.867604    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:43:07.867616    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:43:07.890859    5115 logs.go:123] Gathering logs for kube-scheduler [9f649cfcb9a1] ...
	I0729 16:43:07.890870    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f649cfcb9a1"
	I0729 16:43:07.906843    5115 logs.go:123] Gathering logs for kube-proxy [c57b8ec13eff] ...
	I0729 16:43:07.906855    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57b8ec13eff"
	I0729 16:43:07.919069    5115 logs.go:123] Gathering logs for kube-controller-manager [57d7e217058a] ...
	I0729 16:43:07.919079    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d7e217058a"
	I0729 16:43:07.937276    5115 logs.go:123] Gathering logs for storage-provisioner [5865082a29a7] ...
	I0729 16:43:07.937289    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5865082a29a7"
	I0729 16:43:07.949048    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:43:07.949057    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:43:07.983385    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:43:07.983397    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:43:07.988004    5115 logs.go:123] Gathering logs for etcd [929c951f29cc] ...
	I0729 16:43:07.988014    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929c951f29cc"
	I0729 16:43:08.002509    5115 logs.go:123] Gathering logs for coredns [de37d84523f1] ...
	I0729 16:43:08.002522    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de37d84523f1"
	I0729 16:43:08.017873    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:43:08.017886    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:43:10.531401    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:43:15.533518    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:43:15.533996    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:43:15.571795    5115 logs.go:276] 1 containers: [bb81ed0f3180]
	I0729 16:43:15.571924    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:43:15.592939    5115 logs.go:276] 1 containers: [929c951f29cc]
	I0729 16:43:15.593033    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:43:15.608140    5115 logs.go:276] 2 containers: [99a859473d38 de37d84523f1]
	I0729 16:43:15.608207    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:43:15.620999    5115 logs.go:276] 1 containers: [9f649cfcb9a1]
	I0729 16:43:15.621064    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:43:15.633202    5115 logs.go:276] 1 containers: [c57b8ec13eff]
	I0729 16:43:15.633273    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:43:15.644189    5115 logs.go:276] 1 containers: [57d7e217058a]
	I0729 16:43:15.644255    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:43:15.655021    5115 logs.go:276] 0 containers: []
	W0729 16:43:15.655033    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:43:15.655096    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:43:15.666107    5115 logs.go:276] 1 containers: [5865082a29a7]
	I0729 16:43:15.666124    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:43:15.666132    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:43:15.670213    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:43:15.670221    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:43:15.705608    5115 logs.go:123] Gathering logs for kube-apiserver [bb81ed0f3180] ...
	I0729 16:43:15.705621    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb81ed0f3180"
	I0729 16:43:15.728145    5115 logs.go:123] Gathering logs for coredns [99a859473d38] ...
	I0729 16:43:15.728160    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a859473d38"
	I0729 16:43:15.743551    5115 logs.go:123] Gathering logs for kube-proxy [c57b8ec13eff] ...
	I0729 16:43:15.743564    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57b8ec13eff"
	I0729 16:43:15.755167    5115 logs.go:123] Gathering logs for kube-controller-manager [57d7e217058a] ...
	I0729 16:43:15.755180    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d7e217058a"
	I0729 16:43:15.772120    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:43:15.772131    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:43:15.797305    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:43:15.797314    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:43:15.808276    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:43:15.808288    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:43:15.842648    5115 logs.go:123] Gathering logs for etcd [929c951f29cc] ...
	I0729 16:43:15.842658    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929c951f29cc"
	I0729 16:43:15.856720    5115 logs.go:123] Gathering logs for coredns [de37d84523f1] ...
	I0729 16:43:15.856735    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de37d84523f1"
	I0729 16:43:15.868221    5115 logs.go:123] Gathering logs for kube-scheduler [9f649cfcb9a1] ...
	I0729 16:43:15.868236    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f649cfcb9a1"
	I0729 16:43:15.883475    5115 logs.go:123] Gathering logs for storage-provisioner [5865082a29a7] ...
	I0729 16:43:15.883488    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5865082a29a7"
	I0729 16:43:18.397067    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:43:23.399732    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:43:23.400032    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:43:23.428373    5115 logs.go:276] 1 containers: [bb81ed0f3180]
	I0729 16:43:23.428498    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:43:23.449430    5115 logs.go:276] 1 containers: [929c951f29cc]
	I0729 16:43:23.449515    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:43:23.462381    5115 logs.go:276] 2 containers: [99a859473d38 de37d84523f1]
	I0729 16:43:23.462455    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:43:23.473805    5115 logs.go:276] 1 containers: [9f649cfcb9a1]
	I0729 16:43:23.473872    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:43:23.484226    5115 logs.go:276] 1 containers: [c57b8ec13eff]
	I0729 16:43:23.484294    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:43:23.494847    5115 logs.go:276] 1 containers: [57d7e217058a]
	I0729 16:43:23.494917    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:43:23.505608    5115 logs.go:276] 0 containers: []
	W0729 16:43:23.505619    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:43:23.505679    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:43:23.516073    5115 logs.go:276] 1 containers: [5865082a29a7]
	I0729 16:43:23.516086    5115 logs.go:123] Gathering logs for kube-apiserver [bb81ed0f3180] ...
	I0729 16:43:23.516091    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb81ed0f3180"
	I0729 16:43:23.530234    5115 logs.go:123] Gathering logs for etcd [929c951f29cc] ...
	I0729 16:43:23.530246    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929c951f29cc"
	I0729 16:43:23.545551    5115 logs.go:123] Gathering logs for coredns [99a859473d38] ...
	I0729 16:43:23.545565    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a859473d38"
	I0729 16:43:23.557099    5115 logs.go:123] Gathering logs for coredns [de37d84523f1] ...
	I0729 16:43:23.557110    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de37d84523f1"
	I0729 16:43:23.568547    5115 logs.go:123] Gathering logs for kube-proxy [c57b8ec13eff] ...
	I0729 16:43:23.568559    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57b8ec13eff"
	I0729 16:43:23.580125    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:43:23.580137    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:43:23.603512    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:43:23.603519    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:43:23.614574    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:43:23.614586    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:43:23.646885    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:43:23.646893    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:43:23.681319    5115 logs.go:123] Gathering logs for kube-scheduler [9f649cfcb9a1] ...
	I0729 16:43:23.681333    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f649cfcb9a1"
	I0729 16:43:23.696444    5115 logs.go:123] Gathering logs for kube-controller-manager [57d7e217058a] ...
	I0729 16:43:23.696458    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d7e217058a"
	I0729 16:43:23.719315    5115 logs.go:123] Gathering logs for storage-provisioner [5865082a29a7] ...
	I0729 16:43:23.719326    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5865082a29a7"
	I0729 16:43:23.731636    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:43:23.731649    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:43:26.237938    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:43:31.240097    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:43:31.240394    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:43:31.270205    5115 logs.go:276] 1 containers: [bb81ed0f3180]
	I0729 16:43:31.270329    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:43:31.288378    5115 logs.go:276] 1 containers: [929c951f29cc]
	I0729 16:43:31.288468    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:43:31.301847    5115 logs.go:276] 2 containers: [99a859473d38 de37d84523f1]
	I0729 16:43:31.301924    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:43:31.313593    5115 logs.go:276] 1 containers: [9f649cfcb9a1]
	I0729 16:43:31.313661    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:43:31.325989    5115 logs.go:276] 1 containers: [c57b8ec13eff]
	I0729 16:43:31.326062    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:43:31.336521    5115 logs.go:276] 1 containers: [57d7e217058a]
	I0729 16:43:31.336591    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:43:31.346690    5115 logs.go:276] 0 containers: []
	W0729 16:43:31.346704    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:43:31.346757    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:43:31.357733    5115 logs.go:276] 1 containers: [5865082a29a7]
	I0729 16:43:31.357750    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:43:31.357755    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:43:31.362152    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:43:31.362158    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:43:31.399267    5115 logs.go:123] Gathering logs for kube-apiserver [bb81ed0f3180] ...
	I0729 16:43:31.399282    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb81ed0f3180"
	I0729 16:43:31.414330    5115 logs.go:123] Gathering logs for kube-scheduler [9f649cfcb9a1] ...
	I0729 16:43:31.414342    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f649cfcb9a1"
	I0729 16:43:31.430051    5115 logs.go:123] Gathering logs for kube-controller-manager [57d7e217058a] ...
	I0729 16:43:31.430062    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d7e217058a"
	I0729 16:43:31.450632    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:43:31.450642    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:43:31.474462    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:43:31.474472    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:43:31.507041    5115 logs.go:123] Gathering logs for etcd [929c951f29cc] ...
	I0729 16:43:31.507049    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929c951f29cc"
	I0729 16:43:31.520764    5115 logs.go:123] Gathering logs for coredns [99a859473d38] ...
	I0729 16:43:31.520779    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a859473d38"
	I0729 16:43:31.532104    5115 logs.go:123] Gathering logs for coredns [de37d84523f1] ...
	I0729 16:43:31.532116    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de37d84523f1"
	I0729 16:43:31.545008    5115 logs.go:123] Gathering logs for kube-proxy [c57b8ec13eff] ...
	I0729 16:43:31.545019    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57b8ec13eff"
	I0729 16:43:31.556568    5115 logs.go:123] Gathering logs for storage-provisioner [5865082a29a7] ...
	I0729 16:43:31.556582    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5865082a29a7"
	I0729 16:43:31.568083    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:43:31.568094    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:43:34.085567    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:43:39.087762    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:43:39.087997    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:43:39.119172    5115 logs.go:276] 1 containers: [bb81ed0f3180]
	I0729 16:43:39.119262    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:43:39.133867    5115 logs.go:276] 1 containers: [929c951f29cc]
	I0729 16:43:39.133945    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:43:39.146011    5115 logs.go:276] 2 containers: [99a859473d38 de37d84523f1]
	I0729 16:43:39.146070    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:43:39.156128    5115 logs.go:276] 1 containers: [9f649cfcb9a1]
	I0729 16:43:39.156193    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:43:39.166333    5115 logs.go:276] 1 containers: [c57b8ec13eff]
	I0729 16:43:39.166410    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:43:39.178429    5115 logs.go:276] 1 containers: [57d7e217058a]
	I0729 16:43:39.178498    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:43:39.188387    5115 logs.go:276] 0 containers: []
	W0729 16:43:39.188399    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:43:39.188455    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:43:39.199047    5115 logs.go:276] 1 containers: [5865082a29a7]
	I0729 16:43:39.199062    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:43:39.199068    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:43:39.203781    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:43:39.203793    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:43:39.240596    5115 logs.go:123] Gathering logs for etcd [929c951f29cc] ...
	I0729 16:43:39.240609    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929c951f29cc"
	I0729 16:43:39.254368    5115 logs.go:123] Gathering logs for coredns [de37d84523f1] ...
	I0729 16:43:39.254377    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de37d84523f1"
	I0729 16:43:39.267105    5115 logs.go:123] Gathering logs for kube-proxy [c57b8ec13eff] ...
	I0729 16:43:39.267116    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57b8ec13eff"
	I0729 16:43:39.278674    5115 logs.go:123] Gathering logs for kube-controller-manager [57d7e217058a] ...
	I0729 16:43:39.278686    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d7e217058a"
	I0729 16:43:39.296912    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:43:39.296925    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:43:39.321142    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:43:39.321150    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:43:39.353907    5115 logs.go:123] Gathering logs for coredns [99a859473d38] ...
	I0729 16:43:39.353916    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a859473d38"
	I0729 16:43:39.365616    5115 logs.go:123] Gathering logs for kube-scheduler [9f649cfcb9a1] ...
	I0729 16:43:39.365630    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f649cfcb9a1"
	I0729 16:43:39.380249    5115 logs.go:123] Gathering logs for storage-provisioner [5865082a29a7] ...
	I0729 16:43:39.380262    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5865082a29a7"
	I0729 16:43:39.391814    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:43:39.391828    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:43:39.402994    5115 logs.go:123] Gathering logs for kube-apiserver [bb81ed0f3180] ...
	I0729 16:43:39.403004    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb81ed0f3180"
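Between health checks, each cycle discovers the per-component containers with "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" (logs.go:276) and then tails each one with "docker logs --tail 400 <id>" (logs.go:123). A self-contained Go sketch of that discovery-plus-tail pattern is below; the filter prefix, component names, and the 400-line tail come from the log above, while the helper name and component list are assumptions for illustration only.

    // Illustrative sketch of the container discovery + log gathering seen above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists all container IDs whose name matches k8s_<component>,
    // mirroring the docker ps invocations in the log.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(c, err)
    			continue
    		}
    		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
    		for _, id := range ids {
    			// Same invocation as the "docker logs --tail 400 <id>" lines above.
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("--- %s ---\n%s", id, logs)
    		}
    	}
    }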
	I0729 16:43:41.920175    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:43:46.922452    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:43:46.922848    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:43:46.961281    5115 logs.go:276] 1 containers: [bb81ed0f3180]
	I0729 16:43:46.961412    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:43:46.980621    5115 logs.go:276] 1 containers: [929c951f29cc]
	I0729 16:43:46.980708    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:43:46.994627    5115 logs.go:276] 2 containers: [99a859473d38 de37d84523f1]
	I0729 16:43:46.994702    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:43:47.006657    5115 logs.go:276] 1 containers: [9f649cfcb9a1]
	I0729 16:43:47.006733    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:43:47.017862    5115 logs.go:276] 1 containers: [c57b8ec13eff]
	I0729 16:43:47.017927    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:43:47.029702    5115 logs.go:276] 1 containers: [57d7e217058a]
	I0729 16:43:47.029773    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:43:47.039590    5115 logs.go:276] 0 containers: []
	W0729 16:43:47.039602    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:43:47.039661    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:43:47.049895    5115 logs.go:276] 1 containers: [5865082a29a7]
	I0729 16:43:47.049908    5115 logs.go:123] Gathering logs for coredns [99a859473d38] ...
	I0729 16:43:47.049913    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a859473d38"
	I0729 16:43:47.061160    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:43:47.061173    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:43:47.085433    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:43:47.085441    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:43:47.098931    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:43:47.098945    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:43:47.133567    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:43:47.133575    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:43:47.137817    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:43:47.137825    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:43:47.171831    5115 logs.go:123] Gathering logs for etcd [929c951f29cc] ...
	I0729 16:43:47.171842    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929c951f29cc"
	I0729 16:43:47.185941    5115 logs.go:123] Gathering logs for kube-controller-manager [57d7e217058a] ...
	I0729 16:43:47.185952    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d7e217058a"
	I0729 16:43:47.203656    5115 logs.go:123] Gathering logs for storage-provisioner [5865082a29a7] ...
	I0729 16:43:47.203667    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5865082a29a7"
	I0729 16:43:47.215494    5115 logs.go:123] Gathering logs for kube-apiserver [bb81ed0f3180] ...
	I0729 16:43:47.215506    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb81ed0f3180"
	I0729 16:43:47.229653    5115 logs.go:123] Gathering logs for coredns [de37d84523f1] ...
	I0729 16:43:47.229667    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de37d84523f1"
	I0729 16:43:47.248966    5115 logs.go:123] Gathering logs for kube-scheduler [9f649cfcb9a1] ...
	I0729 16:43:47.248979    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f649cfcb9a1"
	I0729 16:43:47.264006    5115 logs.go:123] Gathering logs for kube-proxy [c57b8ec13eff] ...
	I0729 16:43:47.264017    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57b8ec13eff"
	I0729 16:43:49.783225    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:43:54.785984    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:43:54.786419    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:43:54.827194    5115 logs.go:276] 1 containers: [bb81ed0f3180]
	I0729 16:43:54.827326    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:43:54.848743    5115 logs.go:276] 1 containers: [929c951f29cc]
	I0729 16:43:54.848839    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:43:54.864516    5115 logs.go:276] 4 containers: [4e9041d47ee8 423bbb68c097 99a859473d38 de37d84523f1]
	I0729 16:43:54.864604    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:43:54.877029    5115 logs.go:276] 1 containers: [9f649cfcb9a1]
	I0729 16:43:54.877103    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:43:54.888050    5115 logs.go:276] 1 containers: [c57b8ec13eff]
	I0729 16:43:54.888121    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:43:54.898397    5115 logs.go:276] 1 containers: [57d7e217058a]
	I0729 16:43:54.898463    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:43:54.909198    5115 logs.go:276] 0 containers: []
	W0729 16:43:54.909208    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:43:54.909258    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:43:54.919604    5115 logs.go:276] 1 containers: [5865082a29a7]
	I0729 16:43:54.919625    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:43:54.919630    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:43:54.952410    5115 logs.go:123] Gathering logs for coredns [4e9041d47ee8] ...
	I0729 16:43:54.952417    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e9041d47ee8"
	I0729 16:43:54.964183    5115 logs.go:123] Gathering logs for kube-scheduler [9f649cfcb9a1] ...
	I0729 16:43:54.964192    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f649cfcb9a1"
	I0729 16:43:54.980374    5115 logs.go:123] Gathering logs for kube-proxy [c57b8ec13eff] ...
	I0729 16:43:54.980389    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57b8ec13eff"
	I0729 16:43:54.992842    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:43:54.992855    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:43:54.997072    5115 logs.go:123] Gathering logs for kube-apiserver [bb81ed0f3180] ...
	I0729 16:43:54.997081    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb81ed0f3180"
	I0729 16:43:55.011427    5115 logs.go:123] Gathering logs for coredns [99a859473d38] ...
	I0729 16:43:55.011440    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a859473d38"
	I0729 16:43:55.023421    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:43:55.023435    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:43:55.035054    5115 logs.go:123] Gathering logs for kube-controller-manager [57d7e217058a] ...
	I0729 16:43:55.035067    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d7e217058a"
	I0729 16:43:55.053778    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:43:55.053790    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:43:55.077524    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:43:55.077532    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:43:55.114035    5115 logs.go:123] Gathering logs for etcd [929c951f29cc] ...
	I0729 16:43:55.114047    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929c951f29cc"
	I0729 16:43:55.129095    5115 logs.go:123] Gathering logs for coredns [423bbb68c097] ...
	I0729 16:43:55.129108    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 423bbb68c097"
	I0729 16:43:55.140415    5115 logs.go:123] Gathering logs for coredns [de37d84523f1] ...
	I0729 16:43:55.140430    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de37d84523f1"
	I0729 16:43:55.153795    5115 logs.go:123] Gathering logs for storage-provisioner [5865082a29a7] ...
	I0729 16:43:55.153811    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5865082a29a7"
	I0729 16:43:57.667178    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:44:02.669590    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:44:02.669646    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:44:02.681704    5115 logs.go:276] 1 containers: [bb81ed0f3180]
	I0729 16:44:02.681761    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:44:02.692626    5115 logs.go:276] 1 containers: [929c951f29cc]
	I0729 16:44:02.692682    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:44:02.705687    5115 logs.go:276] 4 containers: [4e9041d47ee8 423bbb68c097 99a859473d38 de37d84523f1]
	I0729 16:44:02.705746    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:44:02.721976    5115 logs.go:276] 1 containers: [9f649cfcb9a1]
	I0729 16:44:02.722043    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:44:02.734258    5115 logs.go:276] 1 containers: [c57b8ec13eff]
	I0729 16:44:02.734320    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:44:02.746359    5115 logs.go:276] 1 containers: [57d7e217058a]
	I0729 16:44:02.746435    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:44:02.757959    5115 logs.go:276] 0 containers: []
	W0729 16:44:02.757973    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:44:02.758032    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:44:02.770001    5115 logs.go:276] 1 containers: [5865082a29a7]
	I0729 16:44:02.770019    5115 logs.go:123] Gathering logs for kube-controller-manager [57d7e217058a] ...
	I0729 16:44:02.770025    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d7e217058a"
	I0729 16:44:02.790911    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:44:02.790928    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:44:02.825735    5115 logs.go:123] Gathering logs for coredns [99a859473d38] ...
	I0729 16:44:02.825760    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a859473d38"
	I0729 16:44:02.838770    5115 logs.go:123] Gathering logs for kube-proxy [c57b8ec13eff] ...
	I0729 16:44:02.838783    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57b8ec13eff"
	I0729 16:44:02.852241    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:44:02.852252    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:44:02.866142    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:44:02.866151    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:44:02.902779    5115 logs.go:123] Gathering logs for coredns [4e9041d47ee8] ...
	I0729 16:44:02.902793    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e9041d47ee8"
	I0729 16:44:02.914975    5115 logs.go:123] Gathering logs for coredns [423bbb68c097] ...
	I0729 16:44:02.914987    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 423bbb68c097"
	I0729 16:44:02.926776    5115 logs.go:123] Gathering logs for coredns [de37d84523f1] ...
	I0729 16:44:02.926787    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de37d84523f1"
	I0729 16:44:02.938649    5115 logs.go:123] Gathering logs for storage-provisioner [5865082a29a7] ...
	I0729 16:44:02.938660    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5865082a29a7"
	I0729 16:44:02.950356    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:44:02.950368    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:44:02.974966    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:44:02.974977    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:44:02.979284    5115 logs.go:123] Gathering logs for kube-apiserver [bb81ed0f3180] ...
	I0729 16:44:02.979290    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb81ed0f3180"
	I0729 16:44:02.997727    5115 logs.go:123] Gathering logs for etcd [929c951f29cc] ...
	I0729 16:44:02.997736    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929c951f29cc"
	I0729 16:44:03.018266    5115 logs.go:123] Gathering logs for kube-scheduler [9f649cfcb9a1] ...
	I0729 16:44:03.018277    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f649cfcb9a1"
	I0729 16:44:05.535431    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:44:10.537957    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:44:10.538083    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:44:10.556905    5115 logs.go:276] 1 containers: [bb81ed0f3180]
	I0729 16:44:10.556989    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:44:10.568925    5115 logs.go:276] 1 containers: [929c951f29cc]
	I0729 16:44:10.568991    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:44:10.579416    5115 logs.go:276] 4 containers: [4e9041d47ee8 423bbb68c097 99a859473d38 de37d84523f1]
	I0729 16:44:10.579499    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:44:10.589916    5115 logs.go:276] 1 containers: [9f649cfcb9a1]
	I0729 16:44:10.589976    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:44:10.600534    5115 logs.go:276] 1 containers: [c57b8ec13eff]
	I0729 16:44:10.600589    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:44:10.610812    5115 logs.go:276] 1 containers: [57d7e217058a]
	I0729 16:44:10.610880    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:44:10.620876    5115 logs.go:276] 0 containers: []
	W0729 16:44:10.620889    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:44:10.620939    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:44:10.632707    5115 logs.go:276] 1 containers: [5865082a29a7]
	I0729 16:44:10.632725    5115 logs.go:123] Gathering logs for kube-apiserver [bb81ed0f3180] ...
	I0729 16:44:10.632731    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb81ed0f3180"
	I0729 16:44:10.646465    5115 logs.go:123] Gathering logs for coredns [99a859473d38] ...
	I0729 16:44:10.646478    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a859473d38"
	I0729 16:44:10.658244    5115 logs.go:123] Gathering logs for coredns [de37d84523f1] ...
	I0729 16:44:10.658257    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de37d84523f1"
	I0729 16:44:10.669748    5115 logs.go:123] Gathering logs for kube-scheduler [9f649cfcb9a1] ...
	I0729 16:44:10.669760    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f649cfcb9a1"
	I0729 16:44:10.684685    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:44:10.684698    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:44:10.718074    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:44:10.718084    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:44:10.752589    5115 logs.go:123] Gathering logs for kube-proxy [c57b8ec13eff] ...
	I0729 16:44:10.752600    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57b8ec13eff"
	I0729 16:44:10.765084    5115 logs.go:123] Gathering logs for storage-provisioner [5865082a29a7] ...
	I0729 16:44:10.765098    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5865082a29a7"
	I0729 16:44:10.776191    5115 logs.go:123] Gathering logs for coredns [4e9041d47ee8] ...
	I0729 16:44:10.776205    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e9041d47ee8"
	I0729 16:44:10.787329    5115 logs.go:123] Gathering logs for coredns [423bbb68c097] ...
	I0729 16:44:10.787343    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 423bbb68c097"
	I0729 16:44:10.800786    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:44:10.800797    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:44:10.805611    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:44:10.805621    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:44:10.820351    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:44:10.820362    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:44:10.844837    5115 logs.go:123] Gathering logs for etcd [929c951f29cc] ...
	I0729 16:44:10.844846    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929c951f29cc"
	I0729 16:44:10.858433    5115 logs.go:123] Gathering logs for kube-controller-manager [57d7e217058a] ...
	I0729 16:44:10.858444    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d7e217058a"
	I0729 16:44:13.377156    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:44:18.377367    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:44:18.377768    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:44:18.413020    5115 logs.go:276] 1 containers: [bb81ed0f3180]
	I0729 16:44:18.413138    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:44:18.432760    5115 logs.go:276] 1 containers: [929c951f29cc]
	I0729 16:44:18.432853    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:44:18.448191    5115 logs.go:276] 4 containers: [4e9041d47ee8 423bbb68c097 99a859473d38 de37d84523f1]
	I0729 16:44:18.448265    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:44:18.459730    5115 logs.go:276] 1 containers: [9f649cfcb9a1]
	I0729 16:44:18.459802    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:44:18.472669    5115 logs.go:276] 1 containers: [c57b8ec13eff]
	I0729 16:44:18.472732    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:44:18.483138    5115 logs.go:276] 1 containers: [57d7e217058a]
	I0729 16:44:18.483206    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:44:18.493705    5115 logs.go:276] 0 containers: []
	W0729 16:44:18.493721    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:44:18.493776    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:44:18.504316    5115 logs.go:276] 1 containers: [5865082a29a7]
	I0729 16:44:18.504337    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:44:18.504342    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:44:18.537717    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:44:18.537730    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:44:18.572191    5115 logs.go:123] Gathering logs for coredns [423bbb68c097] ...
	I0729 16:44:18.572203    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 423bbb68c097"
	I0729 16:44:18.583931    5115 logs.go:123] Gathering logs for coredns [de37d84523f1] ...
	I0729 16:44:18.583943    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de37d84523f1"
	I0729 16:44:18.602884    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:44:18.602900    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:44:18.607008    5115 logs.go:123] Gathering logs for kube-controller-manager [57d7e217058a] ...
	I0729 16:44:18.607014    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d7e217058a"
	I0729 16:44:18.624665    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:44:18.624676    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:44:18.649792    5115 logs.go:123] Gathering logs for kube-apiserver [bb81ed0f3180] ...
	I0729 16:44:18.649799    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb81ed0f3180"
	I0729 16:44:18.663882    5115 logs.go:123] Gathering logs for etcd [929c951f29cc] ...
	I0729 16:44:18.663895    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929c951f29cc"
	I0729 16:44:18.677677    5115 logs.go:123] Gathering logs for coredns [4e9041d47ee8] ...
	I0729 16:44:18.677687    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e9041d47ee8"
	I0729 16:44:18.692946    5115 logs.go:123] Gathering logs for coredns [99a859473d38] ...
	I0729 16:44:18.692958    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a859473d38"
	I0729 16:44:18.712562    5115 logs.go:123] Gathering logs for storage-provisioner [5865082a29a7] ...
	I0729 16:44:18.712575    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5865082a29a7"
	I0729 16:44:18.723881    5115 logs.go:123] Gathering logs for kube-scheduler [9f649cfcb9a1] ...
	I0729 16:44:18.723892    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f649cfcb9a1"
	I0729 16:44:18.743091    5115 logs.go:123] Gathering logs for kube-proxy [c57b8ec13eff] ...
	I0729 16:44:18.743101    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57b8ec13eff"
	I0729 16:44:18.754573    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:44:18.754583    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:44:21.269967    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:44:26.272550    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:44:26.272643    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:44:26.286418    5115 logs.go:276] 1 containers: [bb81ed0f3180]
	I0729 16:44:26.286480    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:44:26.299638    5115 logs.go:276] 1 containers: [929c951f29cc]
	I0729 16:44:26.299696    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:44:26.312187    5115 logs.go:276] 4 containers: [4e9041d47ee8 423bbb68c097 99a859473d38 de37d84523f1]
	I0729 16:44:26.312251    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:44:26.328055    5115 logs.go:276] 1 containers: [9f649cfcb9a1]
	I0729 16:44:26.328116    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:44:26.340105    5115 logs.go:276] 1 containers: [c57b8ec13eff]
	I0729 16:44:26.340165    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:44:26.357411    5115 logs.go:276] 1 containers: [57d7e217058a]
	I0729 16:44:26.357460    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:44:26.367817    5115 logs.go:276] 0 containers: []
	W0729 16:44:26.367826    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:44:26.367874    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:44:26.384204    5115 logs.go:276] 1 containers: [5865082a29a7]
	I0729 16:44:26.384221    5115 logs.go:123] Gathering logs for etcd [929c951f29cc] ...
	I0729 16:44:26.384227    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929c951f29cc"
	I0729 16:44:26.400910    5115 logs.go:123] Gathering logs for coredns [99a859473d38] ...
	I0729 16:44:26.400922    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a859473d38"
	I0729 16:44:26.416648    5115 logs.go:123] Gathering logs for coredns [de37d84523f1] ...
	I0729 16:44:26.416662    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de37d84523f1"
	I0729 16:44:26.429942    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:44:26.429952    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:44:26.456890    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:44:26.456903    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:44:26.491834    5115 logs.go:123] Gathering logs for coredns [423bbb68c097] ...
	I0729 16:44:26.491847    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 423bbb68c097"
	I0729 16:44:26.512930    5115 logs.go:123] Gathering logs for kube-scheduler [9f649cfcb9a1] ...
	I0729 16:44:26.512940    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f649cfcb9a1"
	I0729 16:44:26.533060    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:44:26.533070    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:44:26.537683    5115 logs.go:123] Gathering logs for kube-controller-manager [57d7e217058a] ...
	I0729 16:44:26.537690    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d7e217058a"
	I0729 16:44:26.556066    5115 logs.go:123] Gathering logs for storage-provisioner [5865082a29a7] ...
	I0729 16:44:26.556077    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5865082a29a7"
	I0729 16:44:26.570326    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:44:26.570339    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:44:26.582440    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:44:26.582449    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:44:26.619645    5115 logs.go:123] Gathering logs for kube-apiserver [bb81ed0f3180] ...
	I0729 16:44:26.619654    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb81ed0f3180"
	I0729 16:44:26.633689    5115 logs.go:123] Gathering logs for coredns [4e9041d47ee8] ...
	I0729 16:44:26.633702    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e9041d47ee8"
	I0729 16:44:26.650631    5115 logs.go:123] Gathering logs for kube-proxy [c57b8ec13eff] ...
	I0729 16:44:26.650642    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57b8ec13eff"
	I0729 16:44:29.169414    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:44:34.172046    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:44:34.172492    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:44:34.210978    5115 logs.go:276] 1 containers: [bb81ed0f3180]
	I0729 16:44:34.211113    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:44:34.232946    5115 logs.go:276] 1 containers: [929c951f29cc]
	I0729 16:44:34.233017    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:44:34.248968    5115 logs.go:276] 4 containers: [4e9041d47ee8 423bbb68c097 99a859473d38 de37d84523f1]
	I0729 16:44:34.249036    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:44:34.262829    5115 logs.go:276] 1 containers: [9f649cfcb9a1]
	I0729 16:44:34.262902    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:44:34.280402    5115 logs.go:276] 1 containers: [c57b8ec13eff]
	I0729 16:44:34.280492    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:44:34.293963    5115 logs.go:276] 1 containers: [57d7e217058a]
	I0729 16:44:34.294035    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:44:34.306379    5115 logs.go:276] 0 containers: []
	W0729 16:44:34.306394    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:44:34.306455    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:44:34.318896    5115 logs.go:276] 1 containers: [5865082a29a7]
	I0729 16:44:34.318916    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:44:34.318922    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:44:34.353675    5115 logs.go:123] Gathering logs for kube-proxy [c57b8ec13eff] ...
	I0729 16:44:34.353691    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57b8ec13eff"
	I0729 16:44:34.370895    5115 logs.go:123] Gathering logs for kube-controller-manager [57d7e217058a] ...
	I0729 16:44:34.370909    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d7e217058a"
	I0729 16:44:34.409577    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:44:34.409588    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:44:34.435560    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:44:34.435570    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:44:34.440576    5115 logs.go:123] Gathering logs for coredns [423bbb68c097] ...
	I0729 16:44:34.440585    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 423bbb68c097"
	I0729 16:44:34.452172    5115 logs.go:123] Gathering logs for coredns [de37d84523f1] ...
	I0729 16:44:34.452184    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de37d84523f1"
	I0729 16:44:34.464338    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:44:34.464350    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:44:34.478075    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:44:34.478087    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:44:34.512524    5115 logs.go:123] Gathering logs for kube-apiserver [bb81ed0f3180] ...
	I0729 16:44:34.512535    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb81ed0f3180"
	I0729 16:44:34.526789    5115 logs.go:123] Gathering logs for etcd [929c951f29cc] ...
	I0729 16:44:34.526800    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929c951f29cc"
	I0729 16:44:34.541368    5115 logs.go:123] Gathering logs for coredns [4e9041d47ee8] ...
	I0729 16:44:34.541378    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e9041d47ee8"
	I0729 16:44:34.553432    5115 logs.go:123] Gathering logs for coredns [99a859473d38] ...
	I0729 16:44:34.553444    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a859473d38"
	I0729 16:44:34.567540    5115 logs.go:123] Gathering logs for kube-scheduler [9f649cfcb9a1] ...
	I0729 16:44:34.567552    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f649cfcb9a1"
	I0729 16:44:34.583062    5115 logs.go:123] Gathering logs for storage-provisioner [5865082a29a7] ...
	I0729 16:44:34.583071    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5865082a29a7"
	I0729 16:44:37.097295    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:44:42.099413    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:44:42.099882    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:44:42.140964    5115 logs.go:276] 1 containers: [bb81ed0f3180]
	I0729 16:44:42.141114    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:44:42.163936    5115 logs.go:276] 1 containers: [929c951f29cc]
	I0729 16:44:42.164038    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:44:42.179123    5115 logs.go:276] 4 containers: [4e9041d47ee8 423bbb68c097 99a859473d38 de37d84523f1]
	I0729 16:44:42.179192    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:44:42.191776    5115 logs.go:276] 1 containers: [9f649cfcb9a1]
	I0729 16:44:42.191853    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:44:42.202574    5115 logs.go:276] 1 containers: [c57b8ec13eff]
	I0729 16:44:42.202641    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:44:42.213139    5115 logs.go:276] 1 containers: [57d7e217058a]
	I0729 16:44:42.213201    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:44:42.224009    5115 logs.go:276] 0 containers: []
	W0729 16:44:42.224023    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:44:42.224086    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:44:42.234426    5115 logs.go:276] 1 containers: [5865082a29a7]
	I0729 16:44:42.234445    5115 logs.go:123] Gathering logs for etcd [929c951f29cc] ...
	I0729 16:44:42.234450    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929c951f29cc"
	I0729 16:44:42.248364    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:44:42.248378    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:44:42.260117    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:44:42.260126    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:44:42.284376    5115 logs.go:123] Gathering logs for coredns [4e9041d47ee8] ...
	I0729 16:44:42.284385    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e9041d47ee8"
	I0729 16:44:42.295815    5115 logs.go:123] Gathering logs for coredns [423bbb68c097] ...
	I0729 16:44:42.295830    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 423bbb68c097"
	I0729 16:44:42.307276    5115 logs.go:123] Gathering logs for kube-controller-manager [57d7e217058a] ...
	I0729 16:44:42.307287    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d7e217058a"
	I0729 16:44:42.329127    5115 logs.go:123] Gathering logs for storage-provisioner [5865082a29a7] ...
	I0729 16:44:42.329139    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5865082a29a7"
	I0729 16:44:42.340322    5115 logs.go:123] Gathering logs for kube-proxy [c57b8ec13eff] ...
	I0729 16:44:42.340335    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57b8ec13eff"
	I0729 16:44:42.352074    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:44:42.352088    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:44:42.387765    5115 logs.go:123] Gathering logs for kube-apiserver [bb81ed0f3180] ...
	I0729 16:44:42.387773    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb81ed0f3180"
	I0729 16:44:42.401957    5115 logs.go:123] Gathering logs for coredns [99a859473d38] ...
	I0729 16:44:42.401968    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a859473d38"
	I0729 16:44:42.416316    5115 logs.go:123] Gathering logs for coredns [de37d84523f1] ...
	I0729 16:44:42.416329    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de37d84523f1"
	I0729 16:44:42.428366    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:44:42.428380    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:44:42.432893    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:44:42.432901    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:44:42.466615    5115 logs.go:123] Gathering logs for kube-scheduler [9f649cfcb9a1] ...
	I0729 16:44:42.466627    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f649cfcb9a1"
	I0729 16:44:44.986490    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:44:49.989136    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:44:49.989243    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:44:50.000553    5115 logs.go:276] 1 containers: [bb81ed0f3180]
	I0729 16:44:50.000624    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:44:50.012497    5115 logs.go:276] 1 containers: [929c951f29cc]
	I0729 16:44:50.012561    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:44:50.025370    5115 logs.go:276] 4 containers: [4e9041d47ee8 423bbb68c097 99a859473d38 de37d84523f1]
	I0729 16:44:50.025420    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:44:50.035815    5115 logs.go:276] 1 containers: [9f649cfcb9a1]
	I0729 16:44:50.035866    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:44:50.047526    5115 logs.go:276] 1 containers: [c57b8ec13eff]
	I0729 16:44:50.047602    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:44:50.059117    5115 logs.go:276] 1 containers: [57d7e217058a]
	I0729 16:44:50.059187    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:44:50.070733    5115 logs.go:276] 0 containers: []
	W0729 16:44:50.070746    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:44:50.070793    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:44:50.082128    5115 logs.go:276] 1 containers: [5865082a29a7]
	I0729 16:44:50.082143    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:44:50.082148    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:44:50.094215    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:44:50.094229    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:44:50.132329    5115 logs.go:123] Gathering logs for kube-apiserver [bb81ed0f3180] ...
	I0729 16:44:50.132340    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb81ed0f3180"
	I0729 16:44:50.151506    5115 logs.go:123] Gathering logs for coredns [4e9041d47ee8] ...
	I0729 16:44:50.151519    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e9041d47ee8"
	I0729 16:44:50.167709    5115 logs.go:123] Gathering logs for coredns [423bbb68c097] ...
	I0729 16:44:50.167722    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 423bbb68c097"
	I0729 16:44:50.179696    5115 logs.go:123] Gathering logs for coredns [99a859473d38] ...
	I0729 16:44:50.179706    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a859473d38"
	I0729 16:44:50.192242    5115 logs.go:123] Gathering logs for storage-provisioner [5865082a29a7] ...
	I0729 16:44:50.192254    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5865082a29a7"
	I0729 16:44:50.205207    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:44:50.205218    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:44:50.230858    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:44:50.230875    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:44:50.235367    5115 logs.go:123] Gathering logs for coredns [de37d84523f1] ...
	I0729 16:44:50.235378    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de37d84523f1"
	I0729 16:44:50.248495    5115 logs.go:123] Gathering logs for kube-scheduler [9f649cfcb9a1] ...
	I0729 16:44:50.248510    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f649cfcb9a1"
	I0729 16:44:50.265910    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:44:50.265918    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:44:50.300534    5115 logs.go:123] Gathering logs for etcd [929c951f29cc] ...
	I0729 16:44:50.300546    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929c951f29cc"
	I0729 16:44:50.315077    5115 logs.go:123] Gathering logs for kube-proxy [c57b8ec13eff] ...
	I0729 16:44:50.315089    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57b8ec13eff"
	I0729 16:44:50.327720    5115 logs.go:123] Gathering logs for kube-controller-manager [57d7e217058a] ...
	I0729 16:44:50.327732    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d7e217058a"
	I0729 16:44:52.849323    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:44:57.851941    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:44:57.852421    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:44:57.890684    5115 logs.go:276] 1 containers: [bb81ed0f3180]
	I0729 16:44:57.890823    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:44:57.913021    5115 logs.go:276] 1 containers: [929c951f29cc]
	I0729 16:44:57.913132    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:44:57.929081    5115 logs.go:276] 4 containers: [4e9041d47ee8 423bbb68c097 99a859473d38 de37d84523f1]
	I0729 16:44:57.929166    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:44:57.940960    5115 logs.go:276] 1 containers: [9f649cfcb9a1]
	I0729 16:44:57.941025    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:44:57.951786    5115 logs.go:276] 1 containers: [c57b8ec13eff]
	I0729 16:44:57.951851    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:44:57.962443    5115 logs.go:276] 1 containers: [57d7e217058a]
	I0729 16:44:57.962508    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:44:57.973199    5115 logs.go:276] 0 containers: []
	W0729 16:44:57.973211    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:44:57.973276    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:44:57.983813    5115 logs.go:276] 1 containers: [5865082a29a7]
	I0729 16:44:57.983834    5115 logs.go:123] Gathering logs for coredns [4e9041d47ee8] ...
	I0729 16:44:57.983839    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e9041d47ee8"
	I0729 16:44:57.995655    5115 logs.go:123] Gathering logs for coredns [99a859473d38] ...
	I0729 16:44:57.995669    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a859473d38"
	I0729 16:44:58.009551    5115 logs.go:123] Gathering logs for coredns [de37d84523f1] ...
	I0729 16:44:58.009566    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de37d84523f1"
	I0729 16:44:58.021915    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:44:58.021927    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:44:58.026517    5115 logs.go:123] Gathering logs for kube-apiserver [bb81ed0f3180] ...
	I0729 16:44:58.026527    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb81ed0f3180"
	I0729 16:44:58.041838    5115 logs.go:123] Gathering logs for etcd [929c951f29cc] ...
	I0729 16:44:58.041849    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929c951f29cc"
	I0729 16:44:58.055904    5115 logs.go:123] Gathering logs for kube-proxy [c57b8ec13eff] ...
	I0729 16:44:58.055915    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57b8ec13eff"
	I0729 16:44:58.067666    5115 logs.go:123] Gathering logs for kube-controller-manager [57d7e217058a] ...
	I0729 16:44:58.067679    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d7e217058a"
	I0729 16:44:58.085992    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:44:58.086005    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:44:58.118427    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:44:58.118433    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:44:58.151596    5115 logs.go:123] Gathering logs for storage-provisioner [5865082a29a7] ...
	I0729 16:44:58.151610    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5865082a29a7"
	I0729 16:44:58.163410    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:44:58.163421    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:44:58.187010    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:44:58.187020    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:44:58.198478    5115 logs.go:123] Gathering logs for coredns [423bbb68c097] ...
	I0729 16:44:58.198489    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 423bbb68c097"
	I0729 16:44:58.210273    5115 logs.go:123] Gathering logs for kube-scheduler [9f649cfcb9a1] ...
	I0729 16:44:58.210287    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f649cfcb9a1"
	I0729 16:45:00.727848    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:45:05.730463    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:45:05.730855    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:45:05.760304    5115 logs.go:276] 1 containers: [bb81ed0f3180]
	I0729 16:45:05.760423    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:45:05.779683    5115 logs.go:276] 1 containers: [929c951f29cc]
	I0729 16:45:05.779773    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:45:05.793548    5115 logs.go:276] 4 containers: [4e9041d47ee8 423bbb68c097 99a859473d38 de37d84523f1]
	I0729 16:45:05.793629    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:45:05.805372    5115 logs.go:276] 1 containers: [9f649cfcb9a1]
	I0729 16:45:05.805443    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:45:05.815524    5115 logs.go:276] 1 containers: [c57b8ec13eff]
	I0729 16:45:05.815589    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:45:05.826198    5115 logs.go:276] 1 containers: [57d7e217058a]
	I0729 16:45:05.826262    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:45:05.836139    5115 logs.go:276] 0 containers: []
	W0729 16:45:05.836154    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:45:05.836205    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:45:05.847378    5115 logs.go:276] 1 containers: [5865082a29a7]
	I0729 16:45:05.847396    5115 logs.go:123] Gathering logs for kube-apiserver [bb81ed0f3180] ...
	I0729 16:45:05.847401    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb81ed0f3180"
	I0729 16:45:05.861889    5115 logs.go:123] Gathering logs for storage-provisioner [5865082a29a7] ...
	I0729 16:45:05.861904    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5865082a29a7"
	I0729 16:45:05.873419    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:45:05.873433    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:45:05.897031    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:45:05.897042    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:45:05.901182    5115 logs.go:123] Gathering logs for etcd [929c951f29cc] ...
	I0729 16:45:05.901188    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929c951f29cc"
	I0729 16:45:05.914670    5115 logs.go:123] Gathering logs for coredns [4e9041d47ee8] ...
	I0729 16:45:05.914681    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e9041d47ee8"
	I0729 16:45:05.926656    5115 logs.go:123] Gathering logs for coredns [423bbb68c097] ...
	I0729 16:45:05.926667    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 423bbb68c097"
	I0729 16:45:05.937930    5115 logs.go:123] Gathering logs for coredns [de37d84523f1] ...
	I0729 16:45:05.937944    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de37d84523f1"
	I0729 16:45:05.949841    5115 logs.go:123] Gathering logs for kube-proxy [c57b8ec13eff] ...
	I0729 16:45:05.949852    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57b8ec13eff"
	I0729 16:45:05.961883    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:45:05.961899    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:45:05.995673    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:45:05.995682    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:45:06.030499    5115 logs.go:123] Gathering logs for coredns [99a859473d38] ...
	I0729 16:45:06.030510    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a859473d38"
	I0729 16:45:06.042331    5115 logs.go:123] Gathering logs for kube-controller-manager [57d7e217058a] ...
	I0729 16:45:06.042342    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d7e217058a"
	I0729 16:45:06.059292    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:45:06.059303    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:45:06.071301    5115 logs.go:123] Gathering logs for kube-scheduler [9f649cfcb9a1] ...
	I0729 16:45:06.071320    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f649cfcb9a1"
	I0729 16:45:08.588607    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:45:13.590229    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:45:13.590338    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:45:13.605908    5115 logs.go:276] 1 containers: [bb81ed0f3180]
	I0729 16:45:13.605984    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:45:13.617151    5115 logs.go:276] 1 containers: [929c951f29cc]
	I0729 16:45:13.617222    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:45:13.628702    5115 logs.go:276] 4 containers: [4e9041d47ee8 423bbb68c097 99a859473d38 de37d84523f1]
	I0729 16:45:13.628789    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:45:13.640553    5115 logs.go:276] 1 containers: [9f649cfcb9a1]
	I0729 16:45:13.640623    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:45:13.652041    5115 logs.go:276] 1 containers: [c57b8ec13eff]
	I0729 16:45:13.652116    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:45:13.662629    5115 logs.go:276] 1 containers: [57d7e217058a]
	I0729 16:45:13.662702    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:45:13.673802    5115 logs.go:276] 0 containers: []
	W0729 16:45:13.673814    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:45:13.673858    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:45:13.684955    5115 logs.go:276] 1 containers: [5865082a29a7]
	I0729 16:45:13.684977    5115 logs.go:123] Gathering logs for etcd [929c951f29cc] ...
	I0729 16:45:13.684983    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929c951f29cc"
	I0729 16:45:13.702367    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:45:13.702382    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:45:13.707712    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:45:13.707724    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:45:13.745208    5115 logs.go:123] Gathering logs for kube-apiserver [bb81ed0f3180] ...
	I0729 16:45:13.745221    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb81ed0f3180"
	I0729 16:45:13.760578    5115 logs.go:123] Gathering logs for kube-controller-manager [57d7e217058a] ...
	I0729 16:45:13.760593    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d7e217058a"
	I0729 16:45:13.782825    5115 logs.go:123] Gathering logs for storage-provisioner [5865082a29a7] ...
	I0729 16:45:13.782839    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5865082a29a7"
	I0729 16:45:13.799423    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:45:13.799436    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:45:13.834968    5115 logs.go:123] Gathering logs for coredns [423bbb68c097] ...
	I0729 16:45:13.834984    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 423bbb68c097"
	I0729 16:45:13.848794    5115 logs.go:123] Gathering logs for coredns [de37d84523f1] ...
	I0729 16:45:13.848806    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de37d84523f1"
	I0729 16:45:13.860570    5115 logs.go:123] Gathering logs for kube-scheduler [9f649cfcb9a1] ...
	I0729 16:45:13.860578    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f649cfcb9a1"
	I0729 16:45:13.876595    5115 logs.go:123] Gathering logs for kube-proxy [c57b8ec13eff] ...
	I0729 16:45:13.876607    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57b8ec13eff"
	I0729 16:45:13.890078    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:45:13.890094    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:45:13.902965    5115 logs.go:123] Gathering logs for coredns [4e9041d47ee8] ...
	I0729 16:45:13.902975    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e9041d47ee8"
	I0729 16:45:13.915629    5115 logs.go:123] Gathering logs for coredns [99a859473d38] ...
	I0729 16:45:13.915642    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a859473d38"
	I0729 16:45:13.932326    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:45:13.932337    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:45:16.459368    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:45:21.461038    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:45:21.461268    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:45:21.481003    5115 logs.go:276] 1 containers: [bb81ed0f3180]
	I0729 16:45:21.481113    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:45:21.496867    5115 logs.go:276] 1 containers: [929c951f29cc]
	I0729 16:45:21.496942    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:45:21.508528    5115 logs.go:276] 4 containers: [4e9041d47ee8 423bbb68c097 99a859473d38 de37d84523f1]
	I0729 16:45:21.508600    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:45:21.518340    5115 logs.go:276] 1 containers: [9f649cfcb9a1]
	I0729 16:45:21.518405    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:45:21.528436    5115 logs.go:276] 1 containers: [c57b8ec13eff]
	I0729 16:45:21.528504    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:45:21.543223    5115 logs.go:276] 1 containers: [57d7e217058a]
	I0729 16:45:21.543285    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:45:21.553801    5115 logs.go:276] 0 containers: []
	W0729 16:45:21.553813    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:45:21.553881    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:45:21.564131    5115 logs.go:276] 1 containers: [5865082a29a7]
	I0729 16:45:21.564149    5115 logs.go:123] Gathering logs for kube-proxy [c57b8ec13eff] ...
	I0729 16:45:21.564154    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57b8ec13eff"
	I0729 16:45:21.576371    5115 logs.go:123] Gathering logs for kube-controller-manager [57d7e217058a] ...
	I0729 16:45:21.576385    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d7e217058a"
	I0729 16:45:21.594247    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:45:21.594258    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:45:21.617576    5115 logs.go:123] Gathering logs for kube-apiserver [bb81ed0f3180] ...
	I0729 16:45:21.617583    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb81ed0f3180"
	I0729 16:45:21.631385    5115 logs.go:123] Gathering logs for coredns [4e9041d47ee8] ...
	I0729 16:45:21.631394    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e9041d47ee8"
	I0729 16:45:21.642696    5115 logs.go:123] Gathering logs for coredns [de37d84523f1] ...
	I0729 16:45:21.642710    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de37d84523f1"
	I0729 16:45:21.654206    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:45:21.654218    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:45:21.659092    5115 logs.go:123] Gathering logs for coredns [99a859473d38] ...
	I0729 16:45:21.659100    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a859473d38"
	I0729 16:45:21.670715    5115 logs.go:123] Gathering logs for kube-scheduler [9f649cfcb9a1] ...
	I0729 16:45:21.670726    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f649cfcb9a1"
	I0729 16:45:21.686190    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:45:21.686205    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:45:21.698546    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:45:21.698558    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:45:21.732917    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:45:21.732927    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:45:21.768766    5115 logs.go:123] Gathering logs for coredns [423bbb68c097] ...
	I0729 16:45:21.768778    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 423bbb68c097"
	I0729 16:45:21.793887    5115 logs.go:123] Gathering logs for etcd [929c951f29cc] ...
	I0729 16:45:21.793901    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929c951f29cc"
	I0729 16:45:21.807409    5115 logs.go:123] Gathering logs for storage-provisioner [5865082a29a7] ...
	I0729 16:45:21.807421    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5865082a29a7"
	I0729 16:45:24.320848    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:45:29.323552    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:45:29.324047    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 16:45:29.355838    5115 logs.go:276] 1 containers: [bb81ed0f3180]
	I0729 16:45:29.355987    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 16:45:29.376336    5115 logs.go:276] 1 containers: [929c951f29cc]
	I0729 16:45:29.376439    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 16:45:29.391327    5115 logs.go:276] 4 containers: [4e9041d47ee8 423bbb68c097 99a859473d38 de37d84523f1]
	I0729 16:45:29.391391    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 16:45:29.403342    5115 logs.go:276] 1 containers: [9f649cfcb9a1]
	I0729 16:45:29.403412    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 16:45:29.414477    5115 logs.go:276] 1 containers: [c57b8ec13eff]
	I0729 16:45:29.414543    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 16:45:29.428870    5115 logs.go:276] 1 containers: [57d7e217058a]
	I0729 16:45:29.428937    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 16:45:29.439331    5115 logs.go:276] 0 containers: []
	W0729 16:45:29.439347    5115 logs.go:278] No container was found matching "kindnet"
	I0729 16:45:29.439399    5115 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 16:45:29.450004    5115 logs.go:276] 1 containers: [5865082a29a7]
	I0729 16:45:29.450026    5115 logs.go:123] Gathering logs for etcd [929c951f29cc] ...
	I0729 16:45:29.450030    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 929c951f29cc"
	I0729 16:45:29.468724    5115 logs.go:123] Gathering logs for Docker ...
	I0729 16:45:29.468738    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 16:45:29.491147    5115 logs.go:123] Gathering logs for container status ...
	I0729 16:45:29.491154    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 16:45:29.502397    5115 logs.go:123] Gathering logs for coredns [423bbb68c097] ...
	I0729 16:45:29.502409    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 423bbb68c097"
	I0729 16:45:29.514879    5115 logs.go:123] Gathering logs for coredns [99a859473d38] ...
	I0729 16:45:29.514892    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a859473d38"
	I0729 16:45:29.526354    5115 logs.go:123] Gathering logs for kube-proxy [c57b8ec13eff] ...
	I0729 16:45:29.526365    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57b8ec13eff"
	I0729 16:45:29.538810    5115 logs.go:123] Gathering logs for storage-provisioner [5865082a29a7] ...
	I0729 16:45:29.538822    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5865082a29a7"
	I0729 16:45:29.551211    5115 logs.go:123] Gathering logs for kube-scheduler [9f649cfcb9a1] ...
	I0729 16:45:29.551226    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f649cfcb9a1"
	I0729 16:45:29.566713    5115 logs.go:123] Gathering logs for kube-controller-manager [57d7e217058a] ...
	I0729 16:45:29.566726    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d7e217058a"
	I0729 16:45:29.584620    5115 logs.go:123] Gathering logs for dmesg ...
	I0729 16:45:29.584631    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 16:45:29.588843    5115 logs.go:123] Gathering logs for kube-apiserver [bb81ed0f3180] ...
	I0729 16:45:29.588853    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb81ed0f3180"
	I0729 16:45:29.603264    5115 logs.go:123] Gathering logs for coredns [4e9041d47ee8] ...
	I0729 16:45:29.603277    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e9041d47ee8"
	I0729 16:45:29.614969    5115 logs.go:123] Gathering logs for coredns [de37d84523f1] ...
	I0729 16:45:29.614979    5115 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de37d84523f1"
	I0729 16:45:29.626646    5115 logs.go:123] Gathering logs for kubelet ...
	I0729 16:45:29.626662    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 16:45:29.659029    5115 logs.go:123] Gathering logs for describe nodes ...
	I0729 16:45:29.659040    5115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 16:45:32.196386    5115 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 16:45:37.198960    5115 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 16:45:37.202288    5115 out.go:177] 
	W0729 16:45:37.209274    5115 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0729 16:45:37.209302    5115 out.go:239] * 
	W0729 16:45:37.209734    5115 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:45:37.217193    5115 out.go:177] 
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-170000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (581.02s)
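Note: the binary upgrade itself completed; what fails is the post-upgrade start, in which every poll of https://10.0.2.15:8443/healthz (api_server.go:253 above) times out after 5s until the 6m0s node-wait budget is exhausted. A minimal manual probe of the same endpoint, assuming the stopped-upgrade-170000 profile and its VM are still present on the host (the apiserver container ID bb81ed0f3180 is taken from the log above):

	# Run the same health check the loop performs, from inside the guest,
	# since 10.0.2.15 is the VM-side address.
	minikube -p stopped-upgrade-170000 ssh -- curl -k -m 5 https://10.0.2.15:8443/healthz
	# If the probe hangs or is refused, read the apiserver container directly:
	minikube -p stopped-upgrade-170000 ssh -- sudo docker logs --tail 50 bb81ed0f3180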
TestPause/serial/Start (10.08s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-291000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-291000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.040497625s)
-- stdout --
	* [pause-291000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-291000" primary control-plane node in "pause-291000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-291000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-291000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-291000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-291000 -n pause-291000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-291000 -n pause-291000: exit status 7 (41.491458ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-291000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.08s)
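Note: this failure, and every remaining qemu2 failure in this report, has the same root cause: the driver cannot dial the unix socket at /var/run/socket_vmnet ("Connection refused"), so the VM never gets a network and minikube gives up after one delete-and-retry cycle. A quick host-side check, assuming socket_vmnet was installed via Homebrew as described in the minikube qemu2 driver documentation:

	# Does the socket exist, and is the daemon running?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Restart the daemon; it must run as root to use the vmnet framework.
	sudo "$(which brew)" services restart socket_vmnet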
TestNoKubernetes/serial/StartWithK8s (9.98s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-365000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-365000 --driver=qemu2 : exit status 80 (9.914937541s)
-- stdout --
	* [NoKubernetes-365000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-365000" primary control-plane node in "NoKubernetes-365000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-365000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-365000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-365000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-365000 -n NoKubernetes-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-365000 -n NoKubernetes-365000: exit status 7 (64.757125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.98s)
TestNoKubernetes/serial/StartWithStopK8s (5.32s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-365000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-365000 --no-kubernetes --driver=qemu2 : exit status 80 (5.260639708s)
-- stdout --
	* [NoKubernetes-365000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-365000
	* Restarting existing qemu2 VM for "NoKubernetes-365000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-365000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-365000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-365000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-365000 -n NoKubernetes-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-365000 -n NoKubernetes-365000: exit status 7 (59.671125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.32s)
TestNoKubernetes/serial/Start (5.3s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-365000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-365000 --no-kubernetes --driver=qemu2 : exit status 80 (5.245095708s)
-- stdout --
	* [NoKubernetes-365000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-365000
	* Restarting existing qemu2 VM for "NoKubernetes-365000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-365000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-365000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-365000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-365000 -n NoKubernetes-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-365000 -n NoKubernetes-365000: exit status 7 (56.655792ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.30s)
TestNoKubernetes/serial/StartNoArgs (5.32s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-365000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-365000 --driver=qemu2 : exit status 80 (5.276769792s)
-- stdout --
	* [NoKubernetes-365000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-365000
	* Restarting existing qemu2 VM for "NoKubernetes-365000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-365000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-365000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-365000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-365000 -n NoKubernetes-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-365000 -n NoKubernetes-365000: exit status 7 (47.111417ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-365000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.32s)
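Note: all four TestNoKubernetes subtests above fail within 5-10s on the same refused socket, before any Kubernetes component is involved, so the --no-kubernetes flag is not a factor. A direct connectivity probe that bypasses minikube entirely, assuming the BSD nc shipped with macOS:

	# "Connection refused" here confirms nothing is listening on the socket.
	nc -U /var/run/socket_vmnet < /dev/null
	# Clean up the half-created profile before retrying, as the log advises:
	out/minikube-darwin-arm64 delete -p NoKubernetes-365000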
TestNetworkPlugins/group/auto/Start (9.94s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-295000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-295000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.935810834s)
-- stdout --
	* [auto-295000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-295000" primary control-plane node in "auto-295000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-295000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0729 16:43:50.587690    5331 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:43:50.587811    5331 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:43:50.587814    5331 out.go:304] Setting ErrFile to fd 2...
	I0729 16:43:50.587816    5331 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:43:50.587950    5331 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:43:50.589071    5331 out.go:298] Setting JSON to false
	I0729 16:43:50.605765    5331 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4397,"bootTime":1722292233,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:43:50.605834    5331 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:43:50.611452    5331 out.go:177] * [auto-295000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:43:50.619319    5331 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:43:50.619342    5331 notify.go:220] Checking for updates...
	I0729 16:43:50.626427    5331 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:43:50.629448    5331 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:43:50.632467    5331 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:43:50.635453    5331 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:43:50.636904    5331 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:43:50.639786    5331 config.go:182] Loaded profile config "multinode-971000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:43:50.639855    5331 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:43:50.639898    5331 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:43:50.644418    5331 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:43:50.649446    5331 start.go:297] selected driver: qemu2
	I0729 16:43:50.649454    5331 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:43:50.649460    5331 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:43:50.651863    5331 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:43:50.656450    5331 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:43:50.657783    5331 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:43:50.657798    5331 cni.go:84] Creating CNI manager for ""
	I0729 16:43:50.657807    5331 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:43:50.657811    5331 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:43:50.657837    5331 start.go:340] cluster config:
	{Name:auto-295000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:43:50.661522    5331 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:43:50.668447    5331 out.go:177] * Starting "auto-295000" primary control-plane node in "auto-295000" cluster
	I0729 16:43:50.672400    5331 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:43:50.672414    5331 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:43:50.672420    5331 cache.go:56] Caching tarball of preloaded images
	I0729 16:43:50.672489    5331 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:43:50.672494    5331 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:43:50.672542    5331 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/auto-295000/config.json ...
	I0729 16:43:50.672553    5331 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/auto-295000/config.json: {Name:mkd2dc104d5ca7a3d20bc4fdf8441d6965b57378 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:43:50.672934    5331 start.go:360] acquireMachinesLock for auto-295000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:43:50.672964    5331 start.go:364] duration metric: took 25.708µs to acquireMachinesLock for "auto-295000"
	I0729 16:43:50.672975    5331 start.go:93] Provisioning new machine with config: &{Name:auto-295000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:43:50.673008    5331 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:43:50.676552    5331 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:43:50.692303    5331 start.go:159] libmachine.API.Create for "auto-295000" (driver="qemu2")
	I0729 16:43:50.692327    5331 client.go:168] LocalClient.Create starting
	I0729 16:43:50.692384    5331 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:43:50.692414    5331 main.go:141] libmachine: Decoding PEM data...
	I0729 16:43:50.692423    5331 main.go:141] libmachine: Parsing certificate...
	I0729 16:43:50.692465    5331 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:43:50.692492    5331 main.go:141] libmachine: Decoding PEM data...
	I0729 16:43:50.692502    5331 main.go:141] libmachine: Parsing certificate...
	I0729 16:43:50.692960    5331 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:43:50.840310    5331 main.go:141] libmachine: Creating SSH key...
	I0729 16:43:50.926289    5331 main.go:141] libmachine: Creating Disk image...
	I0729 16:43:50.926294    5331 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:43:50.926504    5331 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/auto-295000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/auto-295000/disk.qcow2
	I0729 16:43:50.935969    5331 main.go:141] libmachine: STDOUT: 
	I0729 16:43:50.936003    5331 main.go:141] libmachine: STDERR: 
	I0729 16:43:50.936052    5331 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/auto-295000/disk.qcow2 +20000M
	I0729 16:43:50.943980    5331 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:43:50.943995    5331 main.go:141] libmachine: STDERR: 
	I0729 16:43:50.944008    5331 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/auto-295000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/auto-295000/disk.qcow2
	I0729 16:43:50.944013    5331 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:43:50.944025    5331 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:43:50.944048    5331 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/auto-295000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/auto-295000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/auto-295000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:85:d1:9f:65:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/auto-295000/disk.qcow2
	I0729 16:43:50.945643    5331 main.go:141] libmachine: STDOUT: 
	I0729 16:43:50.945658    5331 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:43:50.945675    5331 client.go:171] duration metric: took 253.351667ms to LocalClient.Create
	I0729 16:43:52.947826    5331 start.go:128] duration metric: took 2.27485025s to createHost
	I0729 16:43:52.947961    5331 start.go:83] releasing machines lock for "auto-295000", held for 2.275044625s
	W0729 16:43:52.948063    5331 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:43:52.955431    5331 out.go:177] * Deleting "auto-295000" in qemu2 ...
	W0729 16:43:52.983835    5331 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:43:52.983874    5331 start.go:729] Will try again in 5 seconds ...
	I0729 16:43:57.984845    5331 start.go:360] acquireMachinesLock for auto-295000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:43:57.985192    5331 start.go:364] duration metric: took 248.375µs to acquireMachinesLock for "auto-295000"
	I0729 16:43:57.985232    5331 start.go:93] Provisioning new machine with config: &{Name:auto-295000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:43:57.985401    5331 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:43:57.995728    5331 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:43:58.031574    5331 start.go:159] libmachine.API.Create for "auto-295000" (driver="qemu2")
	I0729 16:43:58.031638    5331 client.go:168] LocalClient.Create starting
	I0729 16:43:58.031747    5331 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:43:58.031798    5331 main.go:141] libmachine: Decoding PEM data...
	I0729 16:43:58.031814    5331 main.go:141] libmachine: Parsing certificate...
	I0729 16:43:58.031877    5331 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:43:58.031920    5331 main.go:141] libmachine: Decoding PEM data...
	I0729 16:43:58.031937    5331 main.go:141] libmachine: Parsing certificate...
	I0729 16:43:58.032492    5331 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:43:58.186350    5331 main.go:141] libmachine: Creating SSH key...
	I0729 16:43:58.436016    5331 main.go:141] libmachine: Creating Disk image...
	I0729 16:43:58.436037    5331 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:43:58.436290    5331 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/auto-295000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/auto-295000/disk.qcow2
	I0729 16:43:58.445997    5331 main.go:141] libmachine: STDOUT: 
	I0729 16:43:58.446014    5331 main.go:141] libmachine: STDERR: 
	I0729 16:43:58.446069    5331 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/auto-295000/disk.qcow2 +20000M
	I0729 16:43:58.454794    5331 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:43:58.454816    5331 main.go:141] libmachine: STDERR: 
	I0729 16:43:58.454832    5331 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/auto-295000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/auto-295000/disk.qcow2
	I0729 16:43:58.454838    5331 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:43:58.454848    5331 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:43:58.454886    5331 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/auto-295000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/auto-295000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/auto-295000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:4a:21:11:53:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/auto-295000/disk.qcow2
	I0729 16:43:58.456889    5331 main.go:141] libmachine: STDOUT: 
	I0729 16:43:58.456904    5331 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:43:58.456917    5331 client.go:171] duration metric: took 425.287167ms to LocalClient.Create
	I0729 16:44:00.457297    5331 start.go:128] duration metric: took 2.4719525s to createHost
	I0729 16:44:00.457341    5331 start.go:83] releasing machines lock for "auto-295000", held for 2.472209125s
	W0729 16:44:00.457482    5331 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-295000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-295000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:44:00.469817    5331 out.go:177] 
	W0729 16:44:00.473893    5331 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:44:00.473909    5331 out.go:239] * 
	* 
	W0729 16:44:00.474925    5331 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:44:00.488850    5331 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.94s)
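
Both create attempts above fail before qemu-system-aarch64 ever runs: /opt/socket_vmnet/bin/socket_vmnet_client exits with "Connection refused" as soon as it cannot open the unix socket at /var/run/socket_vmnet, so the VM is deleted and the 5-second retry hits the same wall. A minimal host-side triage sketch, assuming the stock socket_vmnet layout shown in the log (the gateway address below is the upstream README default, not something this report records):

	# Is the daemon alive and holding its unix socket?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet

	# If it is down, the upstream-documented invocation restarts it:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet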

TestNetworkPlugins/group/calico/Start (9.84s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-295000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-295000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.835815833s)

-- stdout --
	* [calico-295000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-295000" primary control-plane node in "calico-295000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-295000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:44:02.599064    5441 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:44:02.599195    5441 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:44:02.599199    5441 out.go:304] Setting ErrFile to fd 2...
	I0729 16:44:02.599201    5441 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:44:02.599325    5441 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:44:02.600392    5441 out.go:298] Setting JSON to false
	I0729 16:44:02.616560    5441 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4409,"bootTime":1722292233,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:44:02.616640    5441 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:44:02.623172    5441 out.go:177] * [calico-295000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:44:02.629958    5441 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:44:02.630007    5441 notify.go:220] Checking for updates...
	I0729 16:44:02.637134    5441 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:44:02.638772    5441 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:44:02.642176    5441 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:44:02.645157    5441 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:44:02.648154    5441 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:44:02.651499    5441 config.go:182] Loaded profile config "multinode-971000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:44:02.651565    5441 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:44:02.651619    5441 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:44:02.656124    5441 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:44:02.663061    5441 start.go:297] selected driver: qemu2
	I0729 16:44:02.663070    5441 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:44:02.663076    5441 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:44:02.665456    5441 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:44:02.669114    5441 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:44:02.672255    5441 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:44:02.672271    5441 cni.go:84] Creating CNI manager for "calico"
	I0729 16:44:02.672275    5441 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0729 16:44:02.672318    5441 start.go:340] cluster config:
	{Name:calico-295000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:44:02.676507    5441 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:44:02.681130    5441 out.go:177] * Starting "calico-295000" primary control-plane node in "calico-295000" cluster
	I0729 16:44:02.689099    5441 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:44:02.689133    5441 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:44:02.689141    5441 cache.go:56] Caching tarball of preloaded images
	I0729 16:44:02.689235    5441 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:44:02.689243    5441 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:44:02.689327    5441 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/calico-295000/config.json ...
	I0729 16:44:02.689338    5441 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/calico-295000/config.json: {Name:mkd8d46500406b3374b600775c6077d53de8d01a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:44:02.689657    5441 start.go:360] acquireMachinesLock for calico-295000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:44:02.689692    5441 start.go:364] duration metric: took 29.417µs to acquireMachinesLock for "calico-295000"
	I0729 16:44:02.689705    5441 start.go:93] Provisioning new machine with config: &{Name:calico-295000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:44:02.689741    5441 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:44:02.694202    5441 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:44:02.710660    5441 start.go:159] libmachine.API.Create for "calico-295000" (driver="qemu2")
	I0729 16:44:02.710691    5441 client.go:168] LocalClient.Create starting
	I0729 16:44:02.710771    5441 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:44:02.710805    5441 main.go:141] libmachine: Decoding PEM data...
	I0729 16:44:02.710821    5441 main.go:141] libmachine: Parsing certificate...
	I0729 16:44:02.710858    5441 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:44:02.710885    5441 main.go:141] libmachine: Decoding PEM data...
	I0729 16:44:02.710899    5441 main.go:141] libmachine: Parsing certificate...
	I0729 16:44:02.711270    5441 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:44:02.860894    5441 main.go:141] libmachine: Creating SSH key...
	I0729 16:44:02.934914    5441 main.go:141] libmachine: Creating Disk image...
	I0729 16:44:02.934934    5441 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:44:02.935187    5441 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/calico-295000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/calico-295000/disk.qcow2
	I0729 16:44:02.945292    5441 main.go:141] libmachine: STDOUT: 
	I0729 16:44:02.945331    5441 main.go:141] libmachine: STDERR: 
	I0729 16:44:02.945399    5441 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/calico-295000/disk.qcow2 +20000M
	I0729 16:44:02.954279    5441 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:44:02.954304    5441 main.go:141] libmachine: STDERR: 
	I0729 16:44:02.954330    5441 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/calico-295000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/calico-295000/disk.qcow2
	I0729 16:44:02.954336    5441 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:44:02.954349    5441 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:44:02.954374    5441 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/calico-295000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/calico-295000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/calico-295000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:15:f1:15:ee:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/calico-295000/disk.qcow2
	I0729 16:44:02.956423    5441 main.go:141] libmachine: STDOUT: 
	I0729 16:44:02.956441    5441 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:44:02.956460    5441 client.go:171] duration metric: took 245.7705ms to LocalClient.Create
	I0729 16:44:04.958631    5441 start.go:128] duration metric: took 2.268928208s to createHost
	I0729 16:44:04.958748    5441 start.go:83] releasing machines lock for "calico-295000", held for 2.269113875s
	W0729 16:44:04.958797    5441 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:44:04.970284    5441 out.go:177] * Deleting "calico-295000" in qemu2 ...
	W0729 16:44:04.998577    5441 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:44:04.998613    5441 start.go:729] Will try again in 5 seconds ...
	I0729 16:44:10.000638    5441 start.go:360] acquireMachinesLock for calico-295000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:44:10.001231    5441 start.go:364] duration metric: took 469.458µs to acquireMachinesLock for "calico-295000"
	I0729 16:44:10.001399    5441 start.go:93] Provisioning new machine with config: &{Name:calico-295000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:44:10.001707    5441 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:44:10.012318    5441 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:44:10.055029    5441 start.go:159] libmachine.API.Create for "calico-295000" (driver="qemu2")
	I0729 16:44:10.055084    5441 client.go:168] LocalClient.Create starting
	I0729 16:44:10.055196    5441 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:44:10.055256    5441 main.go:141] libmachine: Decoding PEM data...
	I0729 16:44:10.055275    5441 main.go:141] libmachine: Parsing certificate...
	I0729 16:44:10.055329    5441 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:44:10.055368    5441 main.go:141] libmachine: Decoding PEM data...
	I0729 16:44:10.055382    5441 main.go:141] libmachine: Parsing certificate...
	I0729 16:44:10.055879    5441 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:44:10.213041    5441 main.go:141] libmachine: Creating SSH key...
	I0729 16:44:10.349424    5441 main.go:141] libmachine: Creating Disk image...
	I0729 16:44:10.349440    5441 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:44:10.349670    5441 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/calico-295000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/calico-295000/disk.qcow2
	I0729 16:44:10.359169    5441 main.go:141] libmachine: STDOUT: 
	I0729 16:44:10.359186    5441 main.go:141] libmachine: STDERR: 
	I0729 16:44:10.359245    5441 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/calico-295000/disk.qcow2 +20000M
	I0729 16:44:10.367303    5441 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:44:10.367317    5441 main.go:141] libmachine: STDERR: 
	I0729 16:44:10.367330    5441 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/calico-295000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/calico-295000/disk.qcow2
	I0729 16:44:10.367336    5441 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:44:10.367345    5441 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:44:10.367372    5441 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/calico-295000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/calico-295000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/calico-295000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:82:64:5c:67:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/calico-295000/disk.qcow2
	I0729 16:44:10.369090    5441 main.go:141] libmachine: STDOUT: 
	I0729 16:44:10.369106    5441 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:44:10.369118    5441 client.go:171] duration metric: took 314.036792ms to LocalClient.Create
	I0729 16:44:12.371141    5441 start.go:128] duration metric: took 2.369486125s to createHost
	I0729 16:44:12.371183    5441 start.go:83] releasing machines lock for "calico-295000", held for 2.369973833s
	W0729 16:44:12.371300    5441 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-295000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-295000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:44:12.379648    5441 out.go:177] 
	W0729 16:44:12.388501    5441 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:44:12.388507    5441 out.go:239] * 
	* 
	W0729 16:44:12.389005    5441 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:44:12.401586    5441 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.84s)
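
The calico run fails identically to auto, so the Calico CNI itself is never exercised. One way to separate driver health from host networking, assuming this minikube build supports the qemu2 driver's builtin user-mode network (which trades away a host-reachable VM IP), is to rerun a single profile without socket_vmnet:

	out/minikube-darwin-arm64 start -p calico-295000 --memory=3072 \
		--alsologtostderr --wait=true --wait-timeout=15m \
		--cni=calico --driver=qemu2 --network=builtin

If that start succeeds, the QEMU/ISO/disk pipeline is intact and only the socket_vmnet service needs attention.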

TestNetworkPlugins/group/custom-flannel/Start (9.78s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-295000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
E0729 16:44:22.594192    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-295000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.776126375s)

-- stdout --
	* [custom-flannel-295000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-295000" primary control-plane node in "custom-flannel-295000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-295000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:44:14.713684    5558 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:44:14.713824    5558 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:44:14.713827    5558 out.go:304] Setting ErrFile to fd 2...
	I0729 16:44:14.713830    5558 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:44:14.713952    5558 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:44:14.715054    5558 out.go:298] Setting JSON to false
	I0729 16:44:14.731505    5558 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4421,"bootTime":1722292233,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:44:14.731603    5558 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:44:14.738352    5558 out.go:177] * [custom-flannel-295000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:44:14.742420    5558 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:44:14.742450    5558 notify.go:220] Checking for updates...
	I0729 16:44:14.749431    5558 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:44:14.751019    5558 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:44:14.755351    5558 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:44:14.759390    5558 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:44:14.762350    5558 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:44:14.765765    5558 config.go:182] Loaded profile config "multinode-971000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:44:14.765827    5558 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:44:14.765878    5558 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:44:14.769375    5558 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:44:14.776311    5558 start.go:297] selected driver: qemu2
	I0729 16:44:14.776317    5558 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:44:14.776322    5558 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:44:14.778505    5558 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:44:14.782424    5558 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:44:14.785439    5558 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:44:14.785454    5558 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0729 16:44:14.785462    5558 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0729 16:44:14.785493    5558 start.go:340] cluster config:
	{Name:custom-flannel-295000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:44:14.789200    5558 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:44:14.797204    5558 out.go:177] * Starting "custom-flannel-295000" primary control-plane node in "custom-flannel-295000" cluster
	I0729 16:44:14.801378    5558 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:44:14.801392    5558 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:44:14.801401    5558 cache.go:56] Caching tarball of preloaded images
	I0729 16:44:14.801456    5558 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:44:14.801461    5558 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:44:14.801510    5558 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/custom-flannel-295000/config.json ...
	I0729 16:44:14.801521    5558 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/custom-flannel-295000/config.json: {Name:mk524735bba0b55da501895eed2bbc7f092414d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:44:14.801737    5558 start.go:360] acquireMachinesLock for custom-flannel-295000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:44:14.801770    5558 start.go:364] duration metric: took 26.5µs to acquireMachinesLock for "custom-flannel-295000"
	I0729 16:44:14.801782    5558 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-295000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:44:14.801816    5558 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:44:14.810343    5558 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:44:14.826703    5558 start.go:159] libmachine.API.Create for "custom-flannel-295000" (driver="qemu2")
	I0729 16:44:14.826736    5558 client.go:168] LocalClient.Create starting
	I0729 16:44:14.826800    5558 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:44:14.826828    5558 main.go:141] libmachine: Decoding PEM data...
	I0729 16:44:14.826839    5558 main.go:141] libmachine: Parsing certificate...
	I0729 16:44:14.826875    5558 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:44:14.826898    5558 main.go:141] libmachine: Decoding PEM data...
	I0729 16:44:14.826906    5558 main.go:141] libmachine: Parsing certificate...
	I0729 16:44:14.827250    5558 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:44:14.975886    5558 main.go:141] libmachine: Creating SSH key...
	I0729 16:44:15.059525    5558 main.go:141] libmachine: Creating Disk image...
	I0729 16:44:15.059531    5558 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:44:15.059753    5558 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/custom-flannel-295000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/custom-flannel-295000/disk.qcow2
	I0729 16:44:15.068936    5558 main.go:141] libmachine: STDOUT: 
	I0729 16:44:15.068959    5558 main.go:141] libmachine: STDERR: 
	I0729 16:44:15.069010    5558 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/custom-flannel-295000/disk.qcow2 +20000M
	I0729 16:44:15.076875    5558 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:44:15.076893    5558 main.go:141] libmachine: STDERR: 
	I0729 16:44:15.076907    5558 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/custom-flannel-295000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/custom-flannel-295000/disk.qcow2
	I0729 16:44:15.076913    5558 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:44:15.076931    5558 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:44:15.076962    5558 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/custom-flannel-295000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/custom-flannel-295000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/custom-flannel-295000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:15:9c:a4:b0:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/custom-flannel-295000/disk.qcow2
	I0729 16:44:15.078562    5558 main.go:141] libmachine: STDOUT: 
	I0729 16:44:15.078585    5558 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:44:15.078605    5558 client.go:171] duration metric: took 251.871625ms to LocalClient.Create
	I0729 16:44:17.080813    5558 start.go:128] duration metric: took 2.279040958s to createHost
	I0729 16:44:17.080886    5558 start.go:83] releasing machines lock for "custom-flannel-295000", held for 2.279175583s
	W0729 16:44:17.080956    5558 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:44:17.091348    5558 out.go:177] * Deleting "custom-flannel-295000" in qemu2 ...
	W0729 16:44:17.122530    5558 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:44:17.122560    5558 start.go:729] Will try again in 5 seconds ...
	I0729 16:44:22.124634    5558 start.go:360] acquireMachinesLock for custom-flannel-295000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:44:22.125160    5558 start.go:364] duration metric: took 442.625µs to acquireMachinesLock for "custom-flannel-295000"
	I0729 16:44:22.125297    5558 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-295000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:44:22.125582    5558 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:44:22.137296    5558 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:44:22.188810    5558 start.go:159] libmachine.API.Create for "custom-flannel-295000" (driver="qemu2")
	I0729 16:44:22.188864    5558 client.go:168] LocalClient.Create starting
	I0729 16:44:22.188986    5558 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:44:22.189076    5558 main.go:141] libmachine: Decoding PEM data...
	I0729 16:44:22.189090    5558 main.go:141] libmachine: Parsing certificate...
	I0729 16:44:22.189149    5558 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:44:22.189202    5558 main.go:141] libmachine: Decoding PEM data...
	I0729 16:44:22.189225    5558 main.go:141] libmachine: Parsing certificate...
	I0729 16:44:22.189851    5558 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:44:22.348988    5558 main.go:141] libmachine: Creating SSH key...
	I0729 16:44:22.402447    5558 main.go:141] libmachine: Creating Disk image...
	I0729 16:44:22.402454    5558 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:44:22.402688    5558 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/custom-flannel-295000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/custom-flannel-295000/disk.qcow2
	I0729 16:44:22.412349    5558 main.go:141] libmachine: STDOUT: 
	I0729 16:44:22.412369    5558 main.go:141] libmachine: STDERR: 
	I0729 16:44:22.412430    5558 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/custom-flannel-295000/disk.qcow2 +20000M
	I0729 16:44:22.420830    5558 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:44:22.420848    5558 main.go:141] libmachine: STDERR: 
	I0729 16:44:22.420860    5558 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/custom-flannel-295000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/custom-flannel-295000/disk.qcow2
	I0729 16:44:22.420866    5558 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:44:22.420881    5558 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:44:22.420913    5558 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/custom-flannel-295000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/custom-flannel-295000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/custom-flannel-295000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:ca:47:2e:24:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/custom-flannel-295000/disk.qcow2
	I0729 16:44:22.422614    5558 main.go:141] libmachine: STDOUT: 
	I0729 16:44:22.422640    5558 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:44:22.422651    5558 client.go:171] duration metric: took 233.789541ms to LocalClient.Create
	I0729 16:44:24.424666    5558 start.go:128] duration metric: took 2.299119125s to createHost
	I0729 16:44:24.424679    5558 start.go:83] releasing machines lock for "custom-flannel-295000", held for 2.29957s
	W0729 16:44:24.424765    5558 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-295000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-295000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:44:24.436032    5558 out.go:177] 
	W0729 16:44:24.441037    5558 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:44:24.441043    5558 out.go:239] * 
	* 
	W0729 16:44:24.441566    5558 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:44:24.451028    5558 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.78s)
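
Every qemu2 start in this group fails before Kubernetes is ever involved: qemu-system-aarch64 is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the client immediately reports Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning nothing on the build agent is accepting connections on the socket_vmnet unix socket. A minimal Go sketch of the same probe (illustrative only, not part of minikube or this test suite; the socket path is copied from the STDERR above):

	// socketcheck.go - hypothetical preflight, assuming the socket_vmnet
	// daemon should be listening on the path seen in the logs above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const path = "/var/run/socket_vmnet" // path copied from the log output
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			// "connection refused" here matches the STDERR lines above and
			// means the socket_vmnet daemon is not running or not reachable.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

While this dial fails on the agent, every qemu2-driver start will exit the same way regardless of the CNI flag under test.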

TestNetworkPlugins/group/false/Start (9.86s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-295000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-295000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.857604s)

-- stdout --
	* [false-295000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-295000" primary control-plane node in "false-295000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-295000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:44:26.816589    5678 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:44:26.816721    5678 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:44:26.816724    5678 out.go:304] Setting ErrFile to fd 2...
	I0729 16:44:26.816727    5678 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:44:26.816869    5678 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:44:26.817831    5678 out.go:298] Setting JSON to false
	I0729 16:44:26.834062    5678 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4433,"bootTime":1722292233,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:44:26.834128    5678 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:44:26.840848    5678 out.go:177] * [false-295000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:44:26.848724    5678 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:44:26.848830    5678 notify.go:220] Checking for updates...
	I0729 16:44:26.855705    5678 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:44:26.858681    5678 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:44:26.861669    5678 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:44:26.864730    5678 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:44:26.867708    5678 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:44:26.871010    5678 config.go:182] Loaded profile config "multinode-971000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:44:26.871074    5678 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:44:26.871115    5678 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:44:26.875725    5678 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:44:26.882711    5678 start.go:297] selected driver: qemu2
	I0729 16:44:26.882719    5678 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:44:26.882727    5678 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:44:26.885005    5678 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:44:26.888734    5678 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:44:26.891779    5678 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:44:26.891795    5678 cni.go:84] Creating CNI manager for "false"
	I0729 16:44:26.891826    5678 start.go:340] cluster config:
	{Name:false-295000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:44:26.895134    5678 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:44:26.902701    5678 out.go:177] * Starting "false-295000" primary control-plane node in "false-295000" cluster
	I0729 16:44:26.906733    5678 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:44:26.906751    5678 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:44:26.906757    5678 cache.go:56] Caching tarball of preloaded images
	I0729 16:44:26.906808    5678 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:44:26.906813    5678 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:44:26.906857    5678 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/false-295000/config.json ...
	I0729 16:44:26.906867    5678 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/false-295000/config.json: {Name:mk9736f573e10f97dce20c4965070c1d6cdaa3a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:44:26.907158    5678 start.go:360] acquireMachinesLock for false-295000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:44:26.907188    5678 start.go:364] duration metric: took 24.708µs to acquireMachinesLock for "false-295000"
	I0729 16:44:26.907199    5678 start.go:93] Provisioning new machine with config: &{Name:false-295000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:44:26.907229    5678 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:44:26.914654    5678 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:44:26.929661    5678 start.go:159] libmachine.API.Create for "false-295000" (driver="qemu2")
	I0729 16:44:26.929683    5678 client.go:168] LocalClient.Create starting
	I0729 16:44:26.929744    5678 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:44:26.929774    5678 main.go:141] libmachine: Decoding PEM data...
	I0729 16:44:26.929784    5678 main.go:141] libmachine: Parsing certificate...
	I0729 16:44:26.929819    5678 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:44:26.929841    5678 main.go:141] libmachine: Decoding PEM data...
	I0729 16:44:26.929854    5678 main.go:141] libmachine: Parsing certificate...
	I0729 16:44:26.930220    5678 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:44:27.081281    5678 main.go:141] libmachine: Creating SSH key...
	I0729 16:44:27.156928    5678 main.go:141] libmachine: Creating Disk image...
	I0729 16:44:27.156934    5678 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:44:27.157164    5678 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/false-295000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/false-295000/disk.qcow2
	I0729 16:44:27.166301    5678 main.go:141] libmachine: STDOUT: 
	I0729 16:44:27.166317    5678 main.go:141] libmachine: STDERR: 
	I0729 16:44:27.166364    5678 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/false-295000/disk.qcow2 +20000M
	I0729 16:44:27.174195    5678 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:44:27.174216    5678 main.go:141] libmachine: STDERR: 
	I0729 16:44:27.174234    5678 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/false-295000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/false-295000/disk.qcow2
	I0729 16:44:27.174237    5678 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:44:27.174250    5678 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:44:27.174284    5678 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/false-295000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/false-295000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/false-295000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:52:b3:0e:76:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/false-295000/disk.qcow2
	I0729 16:44:27.175882    5678 main.go:141] libmachine: STDOUT: 
	I0729 16:44:27.175896    5678 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:44:27.175912    5678 client.go:171] duration metric: took 246.2335ms to LocalClient.Create
	I0729 16:44:29.178030    5678 start.go:128] duration metric: took 2.270849208s to createHost
	I0729 16:44:29.178087    5678 start.go:83] releasing machines lock for "false-295000", held for 2.270957917s
	W0729 16:44:29.178152    5678 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:44:29.193869    5678 out.go:177] * Deleting "false-295000" in qemu2 ...
	W0729 16:44:29.219383    5678 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:44:29.219414    5678 start.go:729] Will try again in 5 seconds ...
	I0729 16:44:34.221394    5678 start.go:360] acquireMachinesLock for false-295000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:44:34.221623    5678 start.go:364] duration metric: took 189.5µs to acquireMachinesLock for "false-295000"
	I0729 16:44:34.221650    5678 start.go:93] Provisioning new machine with config: &{Name:false-295000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:44:34.221726    5678 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:44:34.229982    5678 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:44:34.252863    5678 start.go:159] libmachine.API.Create for "false-295000" (driver="qemu2")
	I0729 16:44:34.252906    5678 client.go:168] LocalClient.Create starting
	I0729 16:44:34.252995    5678 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:44:34.253039    5678 main.go:141] libmachine: Decoding PEM data...
	I0729 16:44:34.253050    5678 main.go:141] libmachine: Parsing certificate...
	I0729 16:44:34.253092    5678 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:44:34.253120    5678 main.go:141] libmachine: Decoding PEM data...
	I0729 16:44:34.253127    5678 main.go:141] libmachine: Parsing certificate...
	I0729 16:44:34.253470    5678 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:44:34.402108    5678 main.go:141] libmachine: Creating SSH key...
	I0729 16:44:34.582165    5678 main.go:141] libmachine: Creating Disk image...
	I0729 16:44:34.582177    5678 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:44:34.582438    5678 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/false-295000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/false-295000/disk.qcow2
	I0729 16:44:34.593001    5678 main.go:141] libmachine: STDOUT: 
	I0729 16:44:34.593025    5678 main.go:141] libmachine: STDERR: 
	I0729 16:44:34.593088    5678 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/false-295000/disk.qcow2 +20000M
	I0729 16:44:34.601936    5678 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:44:34.601953    5678 main.go:141] libmachine: STDERR: 
	I0729 16:44:34.601966    5678 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/false-295000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/false-295000/disk.qcow2
	I0729 16:44:34.601971    5678 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:44:34.601982    5678 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:44:34.602015    5678 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/false-295000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/false-295000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/false-295000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:a8:b4:34:4d:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/false-295000/disk.qcow2
	I0729 16:44:34.603880    5678 main.go:141] libmachine: STDOUT: 
	I0729 16:44:34.603898    5678 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:44:34.603910    5678 client.go:171] duration metric: took 351.008833ms to LocalClient.Create
	I0729 16:44:36.605959    5678 start.go:128] duration metric: took 2.384291792s to createHost
	I0729 16:44:36.605996    5678 start.go:83] releasing machines lock for "false-295000", held for 2.384434292s
	W0729 16:44:36.606186    5678 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-295000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-295000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:44:36.616489    5678 out.go:177] 
	W0729 16:44:36.622655    5678 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:44:36.622667    5678 out.go:239] * 
	* 
	W0729 16:44:36.623703    5678 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:44:36.636676    5678 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.86s)
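
The stdout/stderr above also show the recovery path minikube takes: the first createHost attempt fails, the half-created profile is deleted ("* Deleting \"false-295000\" in qemu2 ..."), it waits five seconds ("Will try again in 5 seconds ..."), and the single retry fails with the same connection-refused error before exiting with GUEST_PROVISION. A rough sketch of that one-retry shape, using hypothetical helper names that do not come from the minikube source:

	// retry.go - illustrative only; mirrors the attempt/cleanup/5s-wait/retry
	// sequence visible in the log, not minikube's actual implementation.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the libmachine create call; in this run it
	// always fails because the socket_vmnet socket refuses connections.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	// deleteHost stands in for cleaning up the half-created VM.
	func deleteHost() {}

	func main() {
		err := createHost()
		if err == nil {
			return
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		deleteHost()
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := createHost(); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}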

TestNetworkPlugins/group/kindnet/Start (9.91s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-295000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
E0729 16:44:39.521593    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-295000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.91098225s)

-- stdout --
	* [kindnet-295000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-295000" primary control-plane node in "kindnet-295000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-295000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:44:38.780574    5787 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:44:38.780695    5787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:44:38.780699    5787 out.go:304] Setting ErrFile to fd 2...
	I0729 16:44:38.780701    5787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:44:38.780847    5787 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:44:38.781927    5787 out.go:298] Setting JSON to false
	I0729 16:44:38.798242    5787 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4445,"bootTime":1722292233,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:44:38.798308    5787 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:44:38.804568    5787 out.go:177] * [kindnet-295000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:44:38.812514    5787 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:44:38.812596    5787 notify.go:220] Checking for updates...
	I0729 16:44:38.820527    5787 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:44:38.823614    5787 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:44:38.828431    5787 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:44:38.836447    5787 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:44:38.839549    5787 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:44:38.842763    5787 config.go:182] Loaded profile config "multinode-971000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:44:38.842829    5787 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:44:38.842882    5787 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:44:38.846546    5787 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:44:38.853493    5787 start.go:297] selected driver: qemu2
	I0729 16:44:38.853500    5787 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:44:38.853506    5787 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:44:38.855782    5787 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:44:38.859502    5787 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:44:38.862574    5787 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:44:38.862608    5787 cni.go:84] Creating CNI manager for "kindnet"
	I0729 16:44:38.862615    5787 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 16:44:38.862648    5787 start.go:340] cluster config:
	{Name:kindnet-295000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:44:38.866197    5787 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:44:38.874487    5787 out.go:177] * Starting "kindnet-295000" primary control-plane node in "kindnet-295000" cluster
	I0729 16:44:38.877414    5787 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:44:38.877430    5787 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:44:38.877442    5787 cache.go:56] Caching tarball of preloaded images
	I0729 16:44:38.877507    5787 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:44:38.877512    5787 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:44:38.877580    5787 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/kindnet-295000/config.json ...
	I0729 16:44:38.877590    5787 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/kindnet-295000/config.json: {Name:mkbb082726c15c8ef985dd8478b9615ce2c5ad86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:44:38.877797    5787 start.go:360] acquireMachinesLock for kindnet-295000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:44:38.877826    5787 start.go:364] duration metric: took 24.583µs to acquireMachinesLock for "kindnet-295000"
	I0729 16:44:38.877837    5787 start.go:93] Provisioning new machine with config: &{Name:kindnet-295000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:44:38.877868    5787 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:44:38.885521    5787 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:44:38.901385    5787 start.go:159] libmachine.API.Create for "kindnet-295000" (driver="qemu2")
	I0729 16:44:38.901410    5787 client.go:168] LocalClient.Create starting
	I0729 16:44:38.901474    5787 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:44:38.901503    5787 main.go:141] libmachine: Decoding PEM data...
	I0729 16:44:38.901514    5787 main.go:141] libmachine: Parsing certificate...
	I0729 16:44:38.901557    5787 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:44:38.901579    5787 main.go:141] libmachine: Decoding PEM data...
	I0729 16:44:38.901586    5787 main.go:141] libmachine: Parsing certificate...
	I0729 16:44:38.901936    5787 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:44:39.051161    5787 main.go:141] libmachine: Creating SSH key...
	I0729 16:44:39.166128    5787 main.go:141] libmachine: Creating Disk image...
	I0729 16:44:39.166133    5787 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:44:39.166336    5787 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kindnet-295000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kindnet-295000/disk.qcow2
	I0729 16:44:39.175633    5787 main.go:141] libmachine: STDOUT: 
	I0729 16:44:39.175652    5787 main.go:141] libmachine: STDERR: 
	I0729 16:44:39.175721    5787 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kindnet-295000/disk.qcow2 +20000M
	I0729 16:44:39.183736    5787 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:44:39.183814    5787 main.go:141] libmachine: STDERR: 
	I0729 16:44:39.183830    5787 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kindnet-295000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kindnet-295000/disk.qcow2
	I0729 16:44:39.183834    5787 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:44:39.183845    5787 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:44:39.183873    5787 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kindnet-295000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kindnet-295000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kindnet-295000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:7e:d7:ed:f9:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kindnet-295000/disk.qcow2
	I0729 16:44:39.185599    5787 main.go:141] libmachine: STDOUT: 
	I0729 16:44:39.185684    5787 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:44:39.185703    5787 client.go:171] duration metric: took 284.297291ms to LocalClient.Create
	I0729 16:44:41.187843    5787 start.go:128] duration metric: took 2.310016916s to createHost
	I0729 16:44:41.187929    5787 start.go:83] releasing machines lock for "kindnet-295000", held for 2.310154041s
	W0729 16:44:41.188017    5787 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:44:41.199804    5787 out.go:177] * Deleting "kindnet-295000" in qemu2 ...
	W0729 16:44:41.225340    5787 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:44:41.225365    5787 start.go:729] Will try again in 5 seconds ...
	I0729 16:44:46.226840    5787 start.go:360] acquireMachinesLock for kindnet-295000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:44:46.227348    5787 start.go:364] duration metric: took 397µs to acquireMachinesLock for "kindnet-295000"
	I0729 16:44:46.227650    5787 start.go:93] Provisioning new machine with config: &{Name:kindnet-295000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:44:46.227664    5787 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:44:46.235027    5787 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:44:46.279649    5787 start.go:159] libmachine.API.Create for "kindnet-295000" (driver="qemu2")
	I0729 16:44:46.279814    5787 client.go:168] LocalClient.Create starting
	I0729 16:44:46.279936    5787 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:44:46.279999    5787 main.go:141] libmachine: Decoding PEM data...
	I0729 16:44:46.280018    5787 main.go:141] libmachine: Parsing certificate...
	I0729 16:44:46.280076    5787 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:44:46.280113    5787 main.go:141] libmachine: Decoding PEM data...
	I0729 16:44:46.280125    5787 main.go:141] libmachine: Parsing certificate...
	I0729 16:44:46.280609    5787 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:44:46.437623    5787 main.go:141] libmachine: Creating SSH key...
	I0729 16:44:46.594665    5787 main.go:141] libmachine: Creating Disk image...
	I0729 16:44:46.594674    5787 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:44:46.594907    5787 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kindnet-295000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kindnet-295000/disk.qcow2
	I0729 16:44:46.604906    5787 main.go:141] libmachine: STDOUT: 
	I0729 16:44:46.605001    5787 main.go:141] libmachine: STDERR: 
	I0729 16:44:46.605049    5787 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kindnet-295000/disk.qcow2 +20000M
	I0729 16:44:46.613134    5787 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:44:46.613189    5787 main.go:141] libmachine: STDERR: 
	I0729 16:44:46.613203    5787 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kindnet-295000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kindnet-295000/disk.qcow2
	I0729 16:44:46.613207    5787 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:44:46.613219    5787 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:44:46.613253    5787 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kindnet-295000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kindnet-295000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kindnet-295000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:d2:35:ba:cc:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kindnet-295000/disk.qcow2
	I0729 16:44:46.614907    5787 main.go:141] libmachine: STDOUT: 
	I0729 16:44:46.614992    5787 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:44:46.615005    5787 client.go:171] duration metric: took 335.195791ms to LocalClient.Create
	I0729 16:44:48.617152    5787 start.go:128] duration metric: took 2.389516208s to createHost
	I0729 16:44:48.617236    5787 start.go:83] releasing machines lock for "kindnet-295000", held for 2.389937667s
	W0729 16:44:48.617605    5787 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-295000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-295000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:44:48.631374    5787 out.go:177] 
	W0729 16:44:48.635400    5787 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:44:48.635424    5787 out.go:239] * 
	* 
	W0729 16:44:48.637910    5787 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:44:48.650270    5787 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.91s)
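
One detail separates this block from the others: before the start command produces any output, a stray E0729 cert_rotation error fires in a longer-lived process (pid 1714, not the minikube binary started here, pid 5787), complaining that /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt does not exist. That path belongs to a profile from an earlier test, so this looks like a stale client-certificate reference in the shared kubeconfig rather than anything caused by the kindnet start, and it is unrelated to the socket_vmnet failure. A minimal, hypothetical staleness check (only the path is copied from the error line; the check itself is not part of the suite):

	// certcheck.go - illustrative sketch: confirm a client certificate
	// referenced by a kubeconfig entry still exists on disk.
	package main

	import (
		"errors"
		"fmt"
		"io/fs"
		"os"
	)

	func main() {
		// Path copied verbatim from the E0729 cert_rotation line above.
		path := "/Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt"
		if _, err := os.Stat(path); errors.Is(err, fs.ErrNotExist) {
			// Matches the "no such file or directory" in the stray error:
			// the profile is gone but something still references its cert.
			fmt.Printf("stale client cert reference: %s\n", path)
			return
		}
		fmt.Println("client.crt present")
	}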

TestNetworkPlugins/group/flannel/Start (9.85s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-295000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-295000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.852450542s)

-- stdout --
	* [flannel-295000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-295000" primary control-plane node in "flannel-295000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-295000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:44:50.979070    5902 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:44:50.979196    5902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:44:50.979200    5902 out.go:304] Setting ErrFile to fd 2...
	I0729 16:44:50.979202    5902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:44:50.979308    5902 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:44:50.980332    5902 out.go:298] Setting JSON to false
	I0729 16:44:50.996286    5902 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4457,"bootTime":1722292233,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:44:50.996402    5902 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:44:51.001721    5902 out.go:177] * [flannel-295000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:44:51.009691    5902 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:44:51.009749    5902 notify.go:220] Checking for updates...
	I0729 16:44:51.015648    5902 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:44:51.018674    5902 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:44:51.022659    5902 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:44:51.025635    5902 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:44:51.028719    5902 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:44:51.032048    5902 config.go:182] Loaded profile config "multinode-971000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:44:51.032119    5902 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:44:51.032165    5902 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:44:51.035662    5902 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:44:51.042672    5902 start.go:297] selected driver: qemu2
	I0729 16:44:51.042681    5902 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:44:51.042698    5902 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:44:51.044793    5902 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:44:51.049669    5902 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:44:51.052792    5902 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:44:51.052815    5902 cni.go:84] Creating CNI manager for "flannel"
	I0729 16:44:51.052824    5902 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0729 16:44:51.052849    5902 start.go:340] cluster config:
	{Name:flannel-295000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:44:51.056413    5902 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:44:51.060681    5902 out.go:177] * Starting "flannel-295000" primary control-plane node in "flannel-295000" cluster
	I0729 16:44:51.064718    5902 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:44:51.064734    5902 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:44:51.064744    5902 cache.go:56] Caching tarball of preloaded images
	I0729 16:44:51.064814    5902 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:44:51.064820    5902 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:44:51.064885    5902 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/flannel-295000/config.json ...
	I0729 16:44:51.064896    5902 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/flannel-295000/config.json: {Name:mk83cfb92357575e950c0559b817f440a0ca7ce9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:44:51.065091    5902 start.go:360] acquireMachinesLock for flannel-295000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:44:51.065122    5902 start.go:364] duration metric: took 25.666µs to acquireMachinesLock for "flannel-295000"
	I0729 16:44:51.065134    5902 start.go:93] Provisioning new machine with config: &{Name:flannel-295000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:44:51.065158    5902 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:44:51.072697    5902 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:44:51.089346    5902 start.go:159] libmachine.API.Create for "flannel-295000" (driver="qemu2")
	I0729 16:44:51.089375    5902 client.go:168] LocalClient.Create starting
	I0729 16:44:51.089444    5902 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:44:51.089473    5902 main.go:141] libmachine: Decoding PEM data...
	I0729 16:44:51.089485    5902 main.go:141] libmachine: Parsing certificate...
	I0729 16:44:51.089523    5902 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:44:51.089545    5902 main.go:141] libmachine: Decoding PEM data...
	I0729 16:44:51.089553    5902 main.go:141] libmachine: Parsing certificate...
	I0729 16:44:51.089907    5902 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:44:51.267180    5902 main.go:141] libmachine: Creating SSH key...
	I0729 16:44:51.385405    5902 main.go:141] libmachine: Creating Disk image...
	I0729 16:44:51.385417    5902 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:44:51.385664    5902 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/flannel-295000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/flannel-295000/disk.qcow2
	I0729 16:44:51.395555    5902 main.go:141] libmachine: STDOUT: 
	I0729 16:44:51.395576    5902 main.go:141] libmachine: STDERR: 
	I0729 16:44:51.395638    5902 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/flannel-295000/disk.qcow2 +20000M
	I0729 16:44:51.403755    5902 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:44:51.403767    5902 main.go:141] libmachine: STDERR: 
	I0729 16:44:51.403779    5902 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/flannel-295000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/flannel-295000/disk.qcow2
	I0729 16:44:51.403784    5902 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:44:51.403802    5902 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:44:51.403832    5902 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/flannel-295000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/flannel-295000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/flannel-295000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:93:03:42:cb:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/flannel-295000/disk.qcow2
	I0729 16:44:51.405505    5902 main.go:141] libmachine: STDOUT: 
	I0729 16:44:51.405519    5902 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:44:51.405553    5902 client.go:171] duration metric: took 316.17125ms to LocalClient.Create
	I0729 16:44:53.407739    5902 start.go:128] duration metric: took 2.342623125s to createHost
	I0729 16:44:53.407805    5902 start.go:83] releasing machines lock for "flannel-295000", held for 2.342746708s
	W0729 16:44:53.407849    5902 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:44:53.418587    5902 out.go:177] * Deleting "flannel-295000" in qemu2 ...
	W0729 16:44:53.438196    5902 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:44:53.438214    5902 start.go:729] Will try again in 5 seconds ...
	I0729 16:44:58.440188    5902 start.go:360] acquireMachinesLock for flannel-295000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:44:58.440410    5902 start.go:364] duration metric: took 179.375µs to acquireMachinesLock for "flannel-295000"
	I0729 16:44:58.440464    5902 start.go:93] Provisioning new machine with config: &{Name:flannel-295000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:44:58.440606    5902 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:44:58.449918    5902 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:44:58.472399    5902 start.go:159] libmachine.API.Create for "flannel-295000" (driver="qemu2")
	I0729 16:44:58.472432    5902 client.go:168] LocalClient.Create starting
	I0729 16:44:58.472506    5902 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:44:58.472542    5902 main.go:141] libmachine: Decoding PEM data...
	I0729 16:44:58.472551    5902 main.go:141] libmachine: Parsing certificate...
	I0729 16:44:58.472594    5902 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:44:58.472619    5902 main.go:141] libmachine: Decoding PEM data...
	I0729 16:44:58.472626    5902 main.go:141] libmachine: Parsing certificate...
	I0729 16:44:58.473039    5902 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:44:58.623282    5902 main.go:141] libmachine: Creating SSH key...
	I0729 16:44:58.741190    5902 main.go:141] libmachine: Creating Disk image...
	I0729 16:44:58.741201    5902 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:44:58.741442    5902 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/flannel-295000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/flannel-295000/disk.qcow2
	I0729 16:44:58.750490    5902 main.go:141] libmachine: STDOUT: 
	I0729 16:44:58.750509    5902 main.go:141] libmachine: STDERR: 
	I0729 16:44:58.750572    5902 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/flannel-295000/disk.qcow2 +20000M
	I0729 16:44:58.758288    5902 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:44:58.758303    5902 main.go:141] libmachine: STDERR: 
	I0729 16:44:58.758314    5902 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/flannel-295000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/flannel-295000/disk.qcow2
	I0729 16:44:58.758318    5902 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:44:58.758329    5902 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:44:58.758354    5902 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/flannel-295000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/flannel-295000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/flannel-295000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:8e:1d:2f:f3:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/flannel-295000/disk.qcow2
	I0729 16:44:58.759924    5902 main.go:141] libmachine: STDOUT: 
	I0729 16:44:58.759939    5902 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:44:58.759952    5902 client.go:171] duration metric: took 287.524875ms to LocalClient.Create
	I0729 16:45:00.762073    5902 start.go:128] duration metric: took 2.321509042s to createHost
	I0729 16:45:00.762120    5902 start.go:83] releasing machines lock for "flannel-295000", held for 2.32176775s
	W0729 16:45:00.762532    5902 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-295000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-295000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:45:00.775283    5902 out.go:177] 
	W0729 16:45:00.779341    5902 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:45:00.779366    5902 out.go:239] * 
	* 
	W0729 16:45:00.782083    5902 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:45:00.790232    5902 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.85s)
TestNetworkPlugins/group/enable-default-cni/Start (9.91s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-295000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-295000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.905715958s)
-- stdout --
	* [enable-default-cni-295000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-295000" primary control-plane node in "enable-default-cni-295000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-295000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0729 16:45:03.175626    6022 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:45:03.175759    6022 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:45:03.175767    6022 out.go:304] Setting ErrFile to fd 2...
	I0729 16:45:03.175770    6022 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:45:03.175892    6022 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:45:03.176950    6022 out.go:298] Setting JSON to false
	I0729 16:45:03.193181    6022 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4470,"bootTime":1722292233,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:45:03.193282    6022 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:45:03.197500    6022 out.go:177] * [enable-default-cni-295000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:45:03.200559    6022 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:45:03.200632    6022 notify.go:220] Checking for updates...
	I0729 16:45:03.208518    6022 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:45:03.211545    6022 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:45:03.215544    6022 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:45:03.219519    6022 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:45:03.222502    6022 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:45:03.225839    6022 config.go:182] Loaded profile config "multinode-971000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:45:03.225908    6022 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:45:03.225956    6022 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:45:03.229530    6022 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:45:03.236505    6022 start.go:297] selected driver: qemu2
	I0729 16:45:03.236511    6022 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:45:03.236517    6022 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:45:03.238748    6022 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:45:03.242424    6022 out.go:177] * Automatically selected the socket_vmnet network
	E0729 16:45:03.246568    6022 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0729 16:45:03.246580    6022 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:45:03.246608    6022 cni.go:84] Creating CNI manager for "bridge"
	I0729 16:45:03.246615    6022 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:45:03.246653    6022 start.go:340] cluster config:
	{Name:enable-default-cni-295000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:45:03.250380    6022 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:45:03.258503    6022 out.go:177] * Starting "enable-default-cni-295000" primary control-plane node in "enable-default-cni-295000" cluster
	I0729 16:45:03.262529    6022 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:45:03.262543    6022 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:45:03.262549    6022 cache.go:56] Caching tarball of preloaded images
	I0729 16:45:03.262601    6022 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:45:03.262606    6022 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:45:03.262661    6022 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/enable-default-cni-295000/config.json ...
	I0729 16:45:03.262675    6022 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/enable-default-cni-295000/config.json: {Name:mk6feaa41f81ef960f74987eda9dd6c2d39f364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:45:03.262888    6022 start.go:360] acquireMachinesLock for enable-default-cni-295000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:45:03.262923    6022 start.go:364] duration metric: took 27.792µs to acquireMachinesLock for "enable-default-cni-295000"
	I0729 16:45:03.262935    6022 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-295000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:45:03.262973    6022 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:45:03.270474    6022 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:45:03.287470    6022 start.go:159] libmachine.API.Create for "enable-default-cni-295000" (driver="qemu2")
	I0729 16:45:03.287511    6022 client.go:168] LocalClient.Create starting
	I0729 16:45:03.287577    6022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:45:03.287608    6022 main.go:141] libmachine: Decoding PEM data...
	I0729 16:45:03.287627    6022 main.go:141] libmachine: Parsing certificate...
	I0729 16:45:03.287668    6022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:45:03.287693    6022 main.go:141] libmachine: Decoding PEM data...
	I0729 16:45:03.287699    6022 main.go:141] libmachine: Parsing certificate...
	I0729 16:45:03.288102    6022 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:45:03.435823    6022 main.go:141] libmachine: Creating SSH key...
	I0729 16:45:03.490306    6022 main.go:141] libmachine: Creating Disk image...
	I0729 16:45:03.490312    6022 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:45:03.490508    6022 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/enable-default-cni-295000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/enable-default-cni-295000/disk.qcow2
	I0729 16:45:03.499572    6022 main.go:141] libmachine: STDOUT: 
	I0729 16:45:03.499589    6022 main.go:141] libmachine: STDERR: 
	I0729 16:45:03.499634    6022 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/enable-default-cni-295000/disk.qcow2 +20000M
	I0729 16:45:03.507516    6022 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:45:03.507529    6022 main.go:141] libmachine: STDERR: 
	I0729 16:45:03.507540    6022 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/enable-default-cni-295000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/enable-default-cni-295000/disk.qcow2
	I0729 16:45:03.507549    6022 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:45:03.507570    6022 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:45:03.507594    6022 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/enable-default-cni-295000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/enable-default-cni-295000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/enable-default-cni-295000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:a1:4c:2f:a3:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/enable-default-cni-295000/disk.qcow2
	I0729 16:45:03.509191    6022 main.go:141] libmachine: STDOUT: 
	I0729 16:45:03.509206    6022 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:45:03.509228    6022 client.go:171] duration metric: took 221.717792ms to LocalClient.Create
	I0729 16:45:05.511373    6022 start.go:128] duration metric: took 2.248435167s to createHost
	I0729 16:45:05.511454    6022 start.go:83] releasing machines lock for "enable-default-cni-295000", held for 2.248588041s
	W0729 16:45:05.511540    6022 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:45:05.525706    6022 out.go:177] * Deleting "enable-default-cni-295000" in qemu2 ...
	W0729 16:45:05.550986    6022 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:45:05.551017    6022 start.go:729] Will try again in 5 seconds ...
	I0729 16:45:10.553188    6022 start.go:360] acquireMachinesLock for enable-default-cni-295000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:45:10.553801    6022 start.go:364] duration metric: took 494.542µs to acquireMachinesLock for "enable-default-cni-295000"
	I0729 16:45:10.553884    6022 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-295000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:45:10.554141    6022 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:45:10.559915    6022 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:45:10.610239    6022 start.go:159] libmachine.API.Create for "enable-default-cni-295000" (driver="qemu2")
	I0729 16:45:10.610312    6022 client.go:168] LocalClient.Create starting
	I0729 16:45:10.610418    6022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:45:10.610484    6022 main.go:141] libmachine: Decoding PEM data...
	I0729 16:45:10.610500    6022 main.go:141] libmachine: Parsing certificate...
	I0729 16:45:10.610556    6022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:45:10.610601    6022 main.go:141] libmachine: Decoding PEM data...
	I0729 16:45:10.610611    6022 main.go:141] libmachine: Parsing certificate...
	I0729 16:45:10.611242    6022 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:45:10.771177    6022 main.go:141] libmachine: Creating SSH key...
	I0729 16:45:10.991656    6022 main.go:141] libmachine: Creating Disk image...
	I0729 16:45:10.991667    6022 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:45:10.991904    6022 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/enable-default-cni-295000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/enable-default-cni-295000/disk.qcow2
	I0729 16:45:11.001413    6022 main.go:141] libmachine: STDOUT: 
	I0729 16:45:11.001434    6022 main.go:141] libmachine: STDERR: 
	I0729 16:45:11.001499    6022 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/enable-default-cni-295000/disk.qcow2 +20000M
	I0729 16:45:11.009593    6022 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:45:11.009608    6022 main.go:141] libmachine: STDERR: 
	I0729 16:45:11.009618    6022 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/enable-default-cni-295000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/enable-default-cni-295000/disk.qcow2
	I0729 16:45:11.009622    6022 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:45:11.009632    6022 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:45:11.009663    6022 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/enable-default-cni-295000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/enable-default-cni-295000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/enable-default-cni-295000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:df:8d:8e:18:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/enable-default-cni-295000/disk.qcow2
	I0729 16:45:11.011303    6022 main.go:141] libmachine: STDOUT: 
	I0729 16:45:11.011321    6022 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:45:11.011332    6022 client.go:171] duration metric: took 401.027083ms to LocalClient.Create
	I0729 16:45:13.013383    6022 start.go:128] duration metric: took 2.459260583s to createHost
	I0729 16:45:13.013414    6022 start.go:83] releasing machines lock for "enable-default-cni-295000", held for 2.459666125s
	W0729 16:45:13.013599    6022 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-295000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-295000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:45:13.024675    6022 out.go:177] 
	W0729 16:45:13.031772    6022 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:45:13.031785    6022 out.go:239] * 
	* 
	W0729 16:45:13.032760    6022 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:45:13.043717    6022 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.91s)
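Worth noting from the stderr above: minikube logs "Found deprecated --enable-default-cni flag, setting --cni=bridge", so this profile actually exercises the bridge CNI. A sketch of the equivalent non-deprecated invocation (the command the test ran, with only the CNI flag swapped):

    out/minikube-darwin-arm64 start -p enable-default-cni-295000 --memory=3072 \
      --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2

The bridge test below passes --cni=bridge directly and fails with the same socket_vmnet "Connection refused", so the deprecated flag is incidental to this failure.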
TestNetworkPlugins/group/bridge/Start (10.01s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-295000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-295000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (10.012271833s)
-- stdout --
	* [bridge-295000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-295000" primary control-plane node in "bridge-295000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-295000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0729 16:45:15.256474    6134 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:45:15.256631    6134 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:45:15.256635    6134 out.go:304] Setting ErrFile to fd 2...
	I0729 16:45:15.256637    6134 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:45:15.256766    6134 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:45:15.257972    6134 out.go:298] Setting JSON to false
	I0729 16:45:15.274502    6134 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4482,"bootTime":1722292233,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:45:15.274567    6134 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:45:15.280883    6134 out.go:177] * [bridge-295000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:45:15.287777    6134 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:45:15.287900    6134 notify.go:220] Checking for updates...
	I0729 16:45:15.294704    6134 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:45:15.297810    6134 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:45:15.300825    6134 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:45:15.303799    6134 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:45:15.306747    6134 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:45:15.310120    6134 config.go:182] Loaded profile config "multinode-971000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:45:15.310184    6134 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:45:15.310235    6134 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:45:15.314745    6134 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:45:15.321827    6134 start.go:297] selected driver: qemu2
	I0729 16:45:15.321833    6134 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:45:15.321839    6134 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:45:15.323957    6134 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:45:15.327792    6134 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:45:15.330805    6134 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:45:15.330834    6134 cni.go:84] Creating CNI manager for "bridge"
	I0729 16:45:15.330841    6134 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:45:15.330875    6134 start.go:340] cluster config:
	{Name:bridge-295000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:45:15.334516    6134 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:45:15.340772    6134 out.go:177] * Starting "bridge-295000" primary control-plane node in "bridge-295000" cluster
	I0729 16:45:15.344812    6134 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:45:15.344828    6134 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:45:15.344845    6134 cache.go:56] Caching tarball of preloaded images
	I0729 16:45:15.344909    6134 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:45:15.344915    6134 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:45:15.344981    6134 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/bridge-295000/config.json ...
	I0729 16:45:15.344993    6134 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/bridge-295000/config.json: {Name:mk6d4730310d408440dafaffd7da7b8578ce03a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:45:15.345376    6134 start.go:360] acquireMachinesLock for bridge-295000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:45:15.345407    6134 start.go:364] duration metric: took 26µs to acquireMachinesLock for "bridge-295000"
	I0729 16:45:15.345418    6134 start.go:93] Provisioning new machine with config: &{Name:bridge-295000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:45:15.345442    6134 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:45:15.349788    6134 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:45:15.365760    6134 start.go:159] libmachine.API.Create for "bridge-295000" (driver="qemu2")
	I0729 16:45:15.365782    6134 client.go:168] LocalClient.Create starting
	I0729 16:45:15.365842    6134 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:45:15.365870    6134 main.go:141] libmachine: Decoding PEM data...
	I0729 16:45:15.365878    6134 main.go:141] libmachine: Parsing certificate...
	I0729 16:45:15.365916    6134 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:45:15.365941    6134 main.go:141] libmachine: Decoding PEM data...
	I0729 16:45:15.365948    6134 main.go:141] libmachine: Parsing certificate...
	I0729 16:45:15.366375    6134 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:45:15.514773    6134 main.go:141] libmachine: Creating SSH key...
	I0729 16:45:15.578225    6134 main.go:141] libmachine: Creating Disk image...
	I0729 16:45:15.578244    6134 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:45:15.578487    6134 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/bridge-295000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/bridge-295000/disk.qcow2
	I0729 16:45:15.589137    6134 main.go:141] libmachine: STDOUT: 
	I0729 16:45:15.589167    6134 main.go:141] libmachine: STDERR: 
	I0729 16:45:15.589248    6134 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/bridge-295000/disk.qcow2 +20000M
	I0729 16:45:15.598095    6134 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:45:15.598157    6134 main.go:141] libmachine: STDERR: 
	I0729 16:45:15.598177    6134 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/bridge-295000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/bridge-295000/disk.qcow2
	I0729 16:45:15.598185    6134 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:45:15.598199    6134 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:45:15.598230    6134 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/bridge-295000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/bridge-295000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/bridge-295000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:cc:4e:d1:2a:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/bridge-295000/disk.qcow2
	I0729 16:45:15.600071    6134 main.go:141] libmachine: STDOUT: 
	I0729 16:45:15.600130    6134 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:45:15.600149    6134 client.go:171] duration metric: took 234.371ms to LocalClient.Create
	I0729 16:45:17.602511    6134 start.go:128] duration metric: took 2.257073875s to createHost
	I0729 16:45:17.602617    6134 start.go:83] releasing machines lock for "bridge-295000", held for 2.257269334s
	W0729 16:45:17.602675    6134 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:45:17.610009    6134 out.go:177] * Deleting "bridge-295000" in qemu2 ...
	W0729 16:45:17.634088    6134 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:45:17.634110    6134 start.go:729] Will try again in 5 seconds ...
	I0729 16:45:22.636151    6134 start.go:360] acquireMachinesLock for bridge-295000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:45:22.636580    6134 start.go:364] duration metric: took 339.375µs to acquireMachinesLock for "bridge-295000"
	I0729 16:45:22.636689    6134 start.go:93] Provisioning new machine with config: &{Name:bridge-295000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:45:22.636973    6134 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:45:22.642683    6134 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:45:22.692413    6134 start.go:159] libmachine.API.Create for "bridge-295000" (driver="qemu2")
	I0729 16:45:22.692479    6134 client.go:168] LocalClient.Create starting
	I0729 16:45:22.692632    6134 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:45:22.692729    6134 main.go:141] libmachine: Decoding PEM data...
	I0729 16:45:22.692745    6134 main.go:141] libmachine: Parsing certificate...
	I0729 16:45:22.692801    6134 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:45:22.692845    6134 main.go:141] libmachine: Decoding PEM data...
	I0729 16:45:22.692859    6134 main.go:141] libmachine: Parsing certificate...
	I0729 16:45:22.693406    6134 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:45:22.851001    6134 main.go:141] libmachine: Creating SSH key...
	I0729 16:45:23.171444    6134 main.go:141] libmachine: Creating Disk image...
	I0729 16:45:23.171457    6134 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:45:23.171685    6134 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/bridge-295000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/bridge-295000/disk.qcow2
	I0729 16:45:23.181116    6134 main.go:141] libmachine: STDOUT: 
	I0729 16:45:23.181131    6134 main.go:141] libmachine: STDERR: 
	I0729 16:45:23.181179    6134 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/bridge-295000/disk.qcow2 +20000M
	I0729 16:45:23.189447    6134 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:45:23.189470    6134 main.go:141] libmachine: STDERR: 
	I0729 16:45:23.189485    6134 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/bridge-295000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/bridge-295000/disk.qcow2
	I0729 16:45:23.189490    6134 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:45:23.189500    6134 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:45:23.189537    6134 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/bridge-295000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/bridge-295000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/bridge-295000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:a4:9a:83:e8:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/bridge-295000/disk.qcow2
	I0729 16:45:23.191183    6134 main.go:141] libmachine: STDOUT: 
	I0729 16:45:23.191197    6134 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:45:23.191213    6134 client.go:171] duration metric: took 498.74175ms to LocalClient.Create
	I0729 16:45:25.193358    6134 start.go:128] duration metric: took 2.556424958s to createHost
	I0729 16:45:25.193430    6134 start.go:83] releasing machines lock for "bridge-295000", held for 2.556903041s
	W0729 16:45:25.193838    6134 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-295000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-295000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:45:25.208450    6134 out.go:177] 
	W0729 16:45:25.212504    6134 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:45:25.212526    6134 out.go:239] * 
	* 
	W0729 16:45:25.215556    6134 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:45:25.226375    6134 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (10.01s)
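Every failure in this group dies at the same step: socket_vmnet_client reports Failed to connect to "/var/run/socket_vmnet": Connection refused before QEMU is ever launched, so the automatic retry five seconds later fails the same way. The daemon can be checked independently of minikube by probing the unix socket directly. The following is a minimal diagnostic sketch, not part of the test suite; the socket path is copied verbatim from the logs above, and the file name is arbitrary.

    // probe_socket_vmnet.go — hypothetical diagnostic sketch (not part of
    // minikube): stat and dial the unix socket that socket_vmnet_client
    // cannot reach in the logs above.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path taken verbatim from the failing runs

        fi, err := os.Stat(sock)
        if err != nil {
            fmt.Printf("stat failed: %v\n", err) // missing file: the daemon never created its socket
            os.Exit(1)
        }
        if fi.Mode()&os.ModeSocket == 0 {
            fmt.Printf("%s exists but is not a socket (mode %v)\n", sock, fi.Mode())
            os.Exit(1)
        }

        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Printf("dial failed: %v\n", err) // "connection refused": file exists, nothing listening
            os.Exit(1)
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the dial is refused, restarting the socket_vmnet service on the CI host should clear this whole group of failures; if it succeeds while the tests still fail, socket permissions (the daemon is typically installed to run as root) would be the next thing to check.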

TestNetworkPlugins/group/kubenet/Start (9.89s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-295000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-295000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.891610459s)

-- stdout --
	* [kubenet-295000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-295000" primary control-plane node in "kubenet-295000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-295000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:45:27.407164    6243 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:45:27.407294    6243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:45:27.407297    6243 out.go:304] Setting ErrFile to fd 2...
	I0729 16:45:27.407300    6243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:45:27.407438    6243 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:45:27.408490    6243 out.go:298] Setting JSON to false
	I0729 16:45:27.425396    6243 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4494,"bootTime":1722292233,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:45:27.425461    6243 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:45:27.431388    6243 out.go:177] * [kubenet-295000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:45:27.439285    6243 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:45:27.439341    6243 notify.go:220] Checking for updates...
	I0729 16:45:27.446393    6243 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:45:27.447852    6243 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:45:27.452348    6243 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:45:27.455411    6243 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:45:27.456973    6243 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:45:27.460782    6243 config.go:182] Loaded profile config "multinode-971000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:45:27.460849    6243 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:45:27.460900    6243 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:45:27.465363    6243 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:45:27.470378    6243 start.go:297] selected driver: qemu2
	I0729 16:45:27.470388    6243 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:45:27.470396    6243 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:45:27.472735    6243 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:45:27.480384    6243 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:45:27.482001    6243 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:45:27.482036    6243 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0729 16:45:27.482067    6243 start.go:340] cluster config:
	{Name:kubenet-295000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:45:27.485891    6243 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:45:27.493436    6243 out.go:177] * Starting "kubenet-295000" primary control-plane node in "kubenet-295000" cluster
	I0729 16:45:27.497412    6243 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:45:27.497429    6243 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:45:27.497441    6243 cache.go:56] Caching tarball of preloaded images
	I0729 16:45:27.497502    6243 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:45:27.497509    6243 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:45:27.497571    6243 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/kubenet-295000/config.json ...
	I0729 16:45:27.497583    6243 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/kubenet-295000/config.json: {Name:mk91c562df550903cbcd1baa7d91048010975c7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:45:27.497967    6243 start.go:360] acquireMachinesLock for kubenet-295000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:45:27.498000    6243 start.go:364] duration metric: took 27.375µs to acquireMachinesLock for "kubenet-295000"
	I0729 16:45:27.498011    6243 start.go:93] Provisioning new machine with config: &{Name:kubenet-295000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:45:27.498037    6243 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:45:27.506315    6243 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:45:27.524241    6243 start.go:159] libmachine.API.Create for "kubenet-295000" (driver="qemu2")
	I0729 16:45:27.524274    6243 client.go:168] LocalClient.Create starting
	I0729 16:45:27.524345    6243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:45:27.524378    6243 main.go:141] libmachine: Decoding PEM data...
	I0729 16:45:27.524389    6243 main.go:141] libmachine: Parsing certificate...
	I0729 16:45:27.524429    6243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:45:27.524451    6243 main.go:141] libmachine: Decoding PEM data...
	I0729 16:45:27.524460    6243 main.go:141] libmachine: Parsing certificate...
	I0729 16:45:27.524853    6243 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:45:27.675509    6243 main.go:141] libmachine: Creating SSH key...
	I0729 16:45:27.862461    6243 main.go:141] libmachine: Creating Disk image...
	I0729 16:45:27.862476    6243 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:45:27.862752    6243 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubenet-295000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubenet-295000/disk.qcow2
	I0729 16:45:27.873025    6243 main.go:141] libmachine: STDOUT: 
	I0729 16:45:27.873051    6243 main.go:141] libmachine: STDERR: 
	I0729 16:45:27.873129    6243 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubenet-295000/disk.qcow2 +20000M
	I0729 16:45:27.881160    6243 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:45:27.881177    6243 main.go:141] libmachine: STDERR: 
	I0729 16:45:27.881191    6243 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubenet-295000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubenet-295000/disk.qcow2
	I0729 16:45:27.881197    6243 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:45:27.881211    6243 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:45:27.881239    6243 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubenet-295000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubenet-295000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubenet-295000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:9f:9e:db:e0:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubenet-295000/disk.qcow2
	I0729 16:45:27.883195    6243 main.go:141] libmachine: STDOUT: 
	I0729 16:45:27.883280    6243 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:45:27.883300    6243 client.go:171] duration metric: took 359.03125ms to LocalClient.Create
	I0729 16:45:29.885333    6243 start.go:128] duration metric: took 2.387356291s to createHost
	I0729 16:45:29.885358    6243 start.go:83] releasing machines lock for "kubenet-295000", held for 2.387424875s
	W0729 16:45:29.885422    6243 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:45:29.897613    6243 out.go:177] * Deleting "kubenet-295000" in qemu2 ...
	W0729 16:45:29.914764    6243 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:45:29.914776    6243 start.go:729] Will try again in 5 seconds ...
	I0729 16:45:34.916712    6243 start.go:360] acquireMachinesLock for kubenet-295000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:45:34.916912    6243 start.go:364] duration metric: took 169.208µs to acquireMachinesLock for "kubenet-295000"
	I0729 16:45:34.916957    6243 start.go:93] Provisioning new machine with config: &{Name:kubenet-295000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:45:34.917053    6243 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:45:34.924341    6243 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 16:45:34.941325    6243 start.go:159] libmachine.API.Create for "kubenet-295000" (driver="qemu2")
	I0729 16:45:34.941363    6243 client.go:168] LocalClient.Create starting
	I0729 16:45:34.941433    6243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:45:34.941463    6243 main.go:141] libmachine: Decoding PEM data...
	I0729 16:45:34.941473    6243 main.go:141] libmachine: Parsing certificate...
	I0729 16:45:34.941513    6243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:45:34.941536    6243 main.go:141] libmachine: Decoding PEM data...
	I0729 16:45:34.941545    6243 main.go:141] libmachine: Parsing certificate...
	I0729 16:45:34.941843    6243 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:45:35.091142    6243 main.go:141] libmachine: Creating SSH key...
	I0729 16:45:35.200571    6243 main.go:141] libmachine: Creating Disk image...
	I0729 16:45:35.200587    6243 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:45:35.200812    6243 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubenet-295000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubenet-295000/disk.qcow2
	I0729 16:45:35.210267    6243 main.go:141] libmachine: STDOUT: 
	I0729 16:45:35.210285    6243 main.go:141] libmachine: STDERR: 
	I0729 16:45:35.210348    6243 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubenet-295000/disk.qcow2 +20000M
	I0729 16:45:35.218445    6243 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:45:35.218461    6243 main.go:141] libmachine: STDERR: 
	I0729 16:45:35.218479    6243 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubenet-295000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubenet-295000/disk.qcow2
	I0729 16:45:35.218484    6243 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:45:35.218495    6243 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:45:35.218525    6243 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubenet-295000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubenet-295000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubenet-295000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:da:b9:f9:d7:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/kubenet-295000/disk.qcow2
	I0729 16:45:35.220157    6243 main.go:141] libmachine: STDOUT: 
	I0729 16:45:35.220172    6243 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:45:35.220185    6243 client.go:171] duration metric: took 278.825417ms to LocalClient.Create
	I0729 16:45:37.222192    6243 start.go:128] duration metric: took 2.3052015s to createHost
	I0729 16:45:37.222204    6243 start.go:83] releasing machines lock for "kubenet-295000", held for 2.305355542s
	W0729 16:45:37.222287    6243 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-295000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-295000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:45:37.240220    6243 out.go:177] 
	W0729 16:45:37.246332    6243 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:45:37.246355    6243 out.go:239] * 
	* 
	W0729 16:45:37.246794    6243 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:45:37.265222    6243 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.89s)
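The kubenet run fails identically, which confirms the network plugin under test is irrelevant: no VM is ever created. A second way to narrow the fault is to run the same wrapper binary the tests use, but around a trivial command instead of qemu-system-aarch64, separating the socket_vmnet side from QEMU entirely. A sketch under the same assumptions — both paths are copied verbatim from the log, /usr/bin/true is an arbitrary stand-in, and the file name is hypothetical:

    // isolate_client.go — hypothetical triage helper: invoke the exact
    // socket_vmnet_client binary and socket path from the logs, wrapping
    // /usr/bin/true instead of the qemu-system-aarch64 command line.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command(
            "/opt/socket_vmnet/bin/socket_vmnet_client", // client path from the logs
            "/var/run/socket_vmnet",                     // socket path from the logs
            "/usr/bin/true",                             // stand-in for the qemu command
        )
        out, err := cmd.CombinedOutput()
        if len(out) > 0 {
            fmt.Printf("%s", out)
        }
        if err != nil {
            // While the daemon is down this should reproduce the report's
            // failure: Failed to connect to "/var/run/socket_vmnet":
            // Connection refused, followed by exit status 1.
            fmt.Printf("client failed: %v\n", err)
        }
    }

Reproducing the same Connection refused here, with QEMU and minikube out of the picture, points squarely at the socket_vmnet daemon on the CI host.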

TestStartStop/group/old-k8s-version/serial/FirstStart (9.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-004000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-004000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.942798708s)

-- stdout --
	* [old-k8s-version-004000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-004000" primary control-plane node in "old-k8s-version-004000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-004000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:45:39.490466    6356 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:45:39.490619    6356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:45:39.490622    6356 out.go:304] Setting ErrFile to fd 2...
	I0729 16:45:39.490624    6356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:45:39.490772    6356 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:45:39.491876    6356 out.go:298] Setting JSON to false
	I0729 16:45:39.508533    6356 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4506,"bootTime":1722292233,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:45:39.508607    6356 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:45:39.515016    6356 out.go:177] * [old-k8s-version-004000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:45:39.523000    6356 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:45:39.523045    6356 notify.go:220] Checking for updates...
	I0729 16:45:39.530900    6356 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:45:39.533947    6356 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:45:39.536918    6356 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:45:39.539970    6356 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:45:39.542960    6356 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:45:39.544816    6356 config.go:182] Loaded profile config "multinode-971000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:45:39.544890    6356 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:45:39.544939    6356 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:45:39.548886    6356 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:45:39.555765    6356 start.go:297] selected driver: qemu2
	I0729 16:45:39.555772    6356 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:45:39.555778    6356 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:45:39.558416    6356 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:45:39.562919    6356 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:45:39.565994    6356 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:45:39.566008    6356 cni.go:84] Creating CNI manager for ""
	I0729 16:45:39.566014    6356 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 16:45:39.566034    6356 start.go:340] cluster config:
	{Name:old-k8s-version-004000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:45:39.569846    6356 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:45:39.576946    6356 out.go:177] * Starting "old-k8s-version-004000" primary control-plane node in "old-k8s-version-004000" cluster
	I0729 16:45:39.581020    6356 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 16:45:39.581034    6356 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 16:45:39.581052    6356 cache.go:56] Caching tarball of preloaded images
	I0729 16:45:39.581108    6356 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:45:39.581115    6356 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 16:45:39.581168    6356 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/old-k8s-version-004000/config.json ...
	I0729 16:45:39.581178    6356 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/old-k8s-version-004000/config.json: {Name:mk408ba19b764c0d462f0a10cc6504260befc4db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:45:39.581462    6356 start.go:360] acquireMachinesLock for old-k8s-version-004000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:45:39.581494    6356 start.go:364] duration metric: took 26.583µs to acquireMachinesLock for "old-k8s-version-004000"
	I0729 16:45:39.581505    6356 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-004000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:45:39.581537    6356 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:45:39.585014    6356 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:45:39.600762    6356 start.go:159] libmachine.API.Create for "old-k8s-version-004000" (driver="qemu2")
	I0729 16:45:39.600792    6356 client.go:168] LocalClient.Create starting
	I0729 16:45:39.600861    6356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:45:39.600891    6356 main.go:141] libmachine: Decoding PEM data...
	I0729 16:45:39.600901    6356 main.go:141] libmachine: Parsing certificate...
	I0729 16:45:39.600955    6356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:45:39.600978    6356 main.go:141] libmachine: Decoding PEM data...
	I0729 16:45:39.600986    6356 main.go:141] libmachine: Parsing certificate...
	I0729 16:45:39.601327    6356 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:45:39.750926    6356 main.go:141] libmachine: Creating SSH key...
	I0729 16:45:40.002951    6356 main.go:141] libmachine: Creating Disk image...
	I0729 16:45:40.002963    6356 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:45:40.003222    6356 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/old-k8s-version-004000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/old-k8s-version-004000/disk.qcow2
	I0729 16:45:40.013496    6356 main.go:141] libmachine: STDOUT: 
	I0729 16:45:40.013519    6356 main.go:141] libmachine: STDERR: 
	I0729 16:45:40.013586    6356 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/old-k8s-version-004000/disk.qcow2 +20000M
	I0729 16:45:40.022204    6356 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:45:40.022224    6356 main.go:141] libmachine: STDERR: 
	I0729 16:45:40.022239    6356 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/old-k8s-version-004000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/old-k8s-version-004000/disk.qcow2
	I0729 16:45:40.022245    6356 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:45:40.022258    6356 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:45:40.022294    6356 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/old-k8s-version-004000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/old-k8s-version-004000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/old-k8s-version-004000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:54:d6:6a:a7:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/old-k8s-version-004000/disk.qcow2
	I0729 16:45:40.024139    6356 main.go:141] libmachine: STDOUT: 
	I0729 16:45:40.024155    6356 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:45:40.024175    6356 client.go:171] duration metric: took 423.391875ms to LocalClient.Create
	I0729 16:45:42.026339    6356 start.go:128] duration metric: took 2.444852125s to createHost
	I0729 16:45:42.026400    6356 start.go:83] releasing machines lock for "old-k8s-version-004000", held for 2.444973s
	W0729 16:45:42.026463    6356 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:45:42.036758    6356 out.go:177] * Deleting "old-k8s-version-004000" in qemu2 ...
	W0729 16:45:42.055826    6356 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:45:42.055844    6356 start.go:729] Will try again in 5 seconds ...
	I0729 16:45:47.057871    6356 start.go:360] acquireMachinesLock for old-k8s-version-004000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:45:47.058288    6356 start.go:364] duration metric: took 354.667µs to acquireMachinesLock for "old-k8s-version-004000"
	I0729 16:45:47.058431    6356 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-004000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:45:47.058589    6356 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:45:47.068228    6356 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:45:47.119466    6356 start.go:159] libmachine.API.Create for "old-k8s-version-004000" (driver="qemu2")
	I0729 16:45:47.119522    6356 client.go:168] LocalClient.Create starting
	I0729 16:45:47.119640    6356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:45:47.119710    6356 main.go:141] libmachine: Decoding PEM data...
	I0729 16:45:47.119727    6356 main.go:141] libmachine: Parsing certificate...
	I0729 16:45:47.119817    6356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:45:47.119861    6356 main.go:141] libmachine: Decoding PEM data...
	I0729 16:45:47.119876    6356 main.go:141] libmachine: Parsing certificate...
	I0729 16:45:47.120412    6356 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:45:47.280444    6356 main.go:141] libmachine: Creating SSH key...
	I0729 16:45:47.349893    6356 main.go:141] libmachine: Creating Disk image...
	I0729 16:45:47.349904    6356 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:45:47.350132    6356 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/old-k8s-version-004000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/old-k8s-version-004000/disk.qcow2
	I0729 16:45:47.359573    6356 main.go:141] libmachine: STDOUT: 
	I0729 16:45:47.359590    6356 main.go:141] libmachine: STDERR: 
	I0729 16:45:47.359651    6356 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/old-k8s-version-004000/disk.qcow2 +20000M
	I0729 16:45:47.367876    6356 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:45:47.367889    6356 main.go:141] libmachine: STDERR: 
	I0729 16:45:47.367899    6356 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/old-k8s-version-004000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/old-k8s-version-004000/disk.qcow2
	I0729 16:45:47.367905    6356 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:45:47.367918    6356 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:45:47.367945    6356 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/old-k8s-version-004000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/old-k8s-version-004000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/old-k8s-version-004000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:2b:2d:30:a3:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/old-k8s-version-004000/disk.qcow2
	I0729 16:45:47.369685    6356 main.go:141] libmachine: STDOUT: 
	I0729 16:45:47.369698    6356 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:45:47.369710    6356 client.go:171] duration metric: took 250.189209ms to LocalClient.Create
	I0729 16:45:49.371751    6356 start.go:128] duration metric: took 2.313212875s to createHost
	I0729 16:45:49.371780    6356 start.go:83] releasing machines lock for "old-k8s-version-004000", held for 2.313543709s
	W0729 16:45:49.371987    6356 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-004000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-004000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:45:49.380807    6356 out.go:177] 
	W0729 16:45:49.384774    6356 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:45:49.385005    6356 out.go:239] * 
	* 
	W0729 16:45:49.386123    6356 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:45:49.395800    6356 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-004000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-004000 -n old-k8s-version-004000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-004000 -n old-k8s-version-004000: exit status 7 (37.393333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-004000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.98s)
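
Both create attempts above get as far as building the disk (qemu-img convert and resize return empty STDERR) and die only when socket_vmnet_client tries to hand QEMU its network file descriptor: /var/run/socket_vmnet refuses the connection. A minimal triage sketch for the CI host, assuming interactive shell access (the socket and client paths are copied from the log; the commands themselves are illustrative and not part of the test run):

	ls -l /var/run/socket_vmnet        # the listening unix socket should exist
	pgrep -fl socket_vmnet             # a socket_vmnet daemon should be running
	sudo lsof -U | grep socket_vmnet   # confirm a process is actually listening on that socket

If the daemon is down, every qemu2 test that selects the socket_vmnet network will fail with this same "Connection refused".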

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-004000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-004000 create -f testdata/busybox.yaml: exit status 1 (27.226416ms)

** stderr ** 
	error: context "old-k8s-version-004000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-004000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-004000 -n old-k8s-version-004000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-004000 -n old-k8s-version-004000: exit status 7 (28.758333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-004000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-004000 -n old-k8s-version-004000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-004000 -n old-k8s-version-004000: exit status 7 (28.708083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-004000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
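
The create never reaches an API server: FirstStart produced no cluster, so no kubeconfig context named old-k8s-version-004000 exists for kubectl to use. A quick confirmation sketch, assuming the same KUBECONFIG the suite exports (standard kubectl subcommands, not part of the test):

	kubectl config get-contexts    # the profile's context is absent from this list
	kubectl config use-context old-k8s-version-004000    # fails with the same "does not exist" error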

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-004000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-004000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-004000 describe deploy/metrics-server -n kube-system: exit status 1 (26.644291ms)

** stderr ** 
	error: context "old-k8s-version-004000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-004000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-004000 -n old-k8s-version-004000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-004000 -n old-k8s-version-004000: exit status 7 (29.309084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-004000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
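
Against a live cluster this assertion reduces to reading the image of the metrics-server deployment that addons enable patched; here it fails earlier because the context is missing. A sketch of the equivalent check, assuming a running cluster (the jsonpath query is illustrative; the test greps kubectl describe output instead):

	kubectl --context old-k8s-version-004000 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# per the --images/--registries flags above, this should print fake.domain/registry.k8s.io/echoserver:1.4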

TestStartStop/group/old-k8s-version/serial/SecondStart (5.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-004000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-004000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.176430917s)

-- stdout --
	* [old-k8s-version-004000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-004000" primary control-plane node in "old-k8s-version-004000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-004000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-004000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:45:51.590832    6402 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:45:51.590962    6402 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:45:51.590966    6402 out.go:304] Setting ErrFile to fd 2...
	I0729 16:45:51.590968    6402 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:45:51.591090    6402 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:45:51.592158    6402 out.go:298] Setting JSON to false
	I0729 16:45:51.608392    6402 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4518,"bootTime":1722292233,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:45:51.608469    6402 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:45:51.613328    6402 out.go:177] * [old-k8s-version-004000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:45:51.621296    6402 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:45:51.621335    6402 notify.go:220] Checking for updates...
	I0729 16:45:51.627331    6402 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:45:51.630313    6402 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:45:51.633383    6402 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:45:51.634843    6402 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:45:51.638331    6402 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:45:51.641575    6402 config.go:182] Loaded profile config "old-k8s-version-004000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 16:45:51.645307    6402 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 16:45:51.648329    6402 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:45:51.652386    6402 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:45:51.659239    6402 start.go:297] selected driver: qemu2
	I0729 16:45:51.659246    6402 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-004000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:45:51.659298    6402 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:45:51.661634    6402 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:45:51.661658    6402 cni.go:84] Creating CNI manager for ""
	I0729 16:45:51.661666    6402 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 16:45:51.661695    6402 start.go:340] cluster config:
	{Name:old-k8s-version-004000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-004000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:45:51.665198    6402 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:45:51.673316    6402 out.go:177] * Starting "old-k8s-version-004000" primary control-plane node in "old-k8s-version-004000" cluster
	I0729 16:45:51.677379    6402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 16:45:51.677393    6402 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 16:45:51.677402    6402 cache.go:56] Caching tarball of preloaded images
	I0729 16:45:51.677461    6402 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:45:51.677470    6402 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 16:45:51.677525    6402 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/old-k8s-version-004000/config.json ...
	I0729 16:45:51.678028    6402 start.go:360] acquireMachinesLock for old-k8s-version-004000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:45:51.678056    6402 start.go:364] duration metric: took 21.25µs to acquireMachinesLock for "old-k8s-version-004000"
	I0729 16:45:51.678065    6402 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:45:51.678071    6402 fix.go:54] fixHost starting: 
	I0729 16:45:51.678182    6402 fix.go:112] recreateIfNeeded on old-k8s-version-004000: state=Stopped err=<nil>
	W0729 16:45:51.678190    6402 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:45:51.682258    6402 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-004000" ...
	I0729 16:45:51.690309    6402 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:45:51.690360    6402 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/old-k8s-version-004000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/old-k8s-version-004000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/old-k8s-version-004000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:2b:2d:30:a3:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/old-k8s-version-004000/disk.qcow2
	I0729 16:45:51.692326    6402 main.go:141] libmachine: STDOUT: 
	I0729 16:45:51.692345    6402 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:45:51.692373    6402 fix.go:56] duration metric: took 14.301417ms for fixHost
	I0729 16:45:51.692378    6402 start.go:83] releasing machines lock for "old-k8s-version-004000", held for 14.318625ms
	W0729 16:45:51.692383    6402 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:45:51.692417    6402 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:45:51.692421    6402 start.go:729] Will try again in 5 seconds ...
	I0729 16:45:56.693972    6402 start.go:360] acquireMachinesLock for old-k8s-version-004000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:45:56.694134    6402 start.go:364] duration metric: took 124.375µs to acquireMachinesLock for "old-k8s-version-004000"
	I0729 16:45:56.694171    6402 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:45:56.694177    6402 fix.go:54] fixHost starting: 
	I0729 16:45:56.694366    6402 fix.go:112] recreateIfNeeded on old-k8s-version-004000: state=Stopped err=<nil>
	W0729 16:45:56.694372    6402 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:45:56.701577    6402 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-004000" ...
	I0729 16:45:56.705493    6402 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:45:56.705554    6402 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/old-k8s-version-004000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/old-k8s-version-004000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/old-k8s-version-004000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:2b:2d:30:a3:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/old-k8s-version-004000/disk.qcow2
	I0729 16:45:56.708249    6402 main.go:141] libmachine: STDOUT: 
	I0729 16:45:56.708281    6402 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:45:56.708309    6402 fix.go:56] duration metric: took 14.133292ms for fixHost
	I0729 16:45:56.708314    6402 start.go:83] releasing machines lock for "old-k8s-version-004000", held for 14.173375ms
	W0729 16:45:56.708381    6402 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-004000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-004000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:45:56.716352    6402 out.go:177] 
	W0729 16:45:56.720463    6402 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:45:56.720469    6402 out.go:239] * 
	* 
	W0729 16:45:56.720978    6402 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:45:56.732496    6402 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-004000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-004000 -n old-k8s-version-004000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-004000 -n old-k8s-version-004000: exit status 7 (30.34675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-004000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.21s)
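
The error text prescribes its own recovery path. A sketch of the manual retry, with the binary path and profile name copied from the log and the start flags abbreviated from the full invocation above:

	out/minikube-darwin-arm64 delete -p old-k8s-version-004000
	out/minikube-darwin-arm64 start -p old-k8s-version-004000 --driver=qemu2 --kubernetes-version=v1.20.0

Note that delete-and-retry only clears stale machine state; it cannot repair the refused socket_vmnet connection, so on this host the second start would fail identically until the daemon is restored.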

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-004000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-004000 -n old-k8s-version-004000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-004000 -n old-k8s-version-004000: exit status 7 (29.065875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-004000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-004000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-004000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-004000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.054917ms)

** stderr ** 
	error: context "old-k8s-version-004000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-004000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-004000 -n old-k8s-version-004000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-004000 -n old-k8s-version-004000: exit status 7 (29.265958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-004000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-004000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-004000 -n old-k8s-version-004000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-004000 -n old-k8s-version-004000: exit status 7 (29.476625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-004000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
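
The want list is what the test expects image list to report for a v1.20.0 cluster; with no VM the command returns nothing, so every image is flagged missing. A sketch for eyeballing the same output by hand, assuming jq is installed and that the JSON entries carry a repoTags field (worth verifying against the actual output; jq is not used by the test itself):

	out/minikube-darwin-arm64 -p old-k8s-version-004000 image list --format=json | jq -r '.[].repoTags[]' | sort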

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-004000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-004000 --alsologtostderr -v=1: exit status 83 (41.714667ms)

-- stdout --
	* The control-plane node old-k8s-version-004000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-004000"

-- /stdout --
** stderr ** 
	I0729 16:45:56.954473    6425 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:45:56.955476    6425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:45:56.955480    6425 out.go:304] Setting ErrFile to fd 2...
	I0729 16:45:56.955483    6425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:45:56.955626    6425 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:45:56.955835    6425 out.go:298] Setting JSON to false
	I0729 16:45:56.955842    6425 mustload.go:65] Loading cluster: old-k8s-version-004000
	I0729 16:45:56.956023    6425 config.go:182] Loaded profile config "old-k8s-version-004000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 16:45:56.960865    6425 out.go:177] * The control-plane node old-k8s-version-004000 host is not running: state=Stopped
	I0729 16:45:56.963787    6425 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-004000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-004000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-004000 -n old-k8s-version-004000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-004000 -n old-k8s-version-004000: exit status 7 (28.808167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-004000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-004000 -n old-k8s-version-004000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-004000 -n old-k8s-version-004000: exit status 7 (29.306334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-004000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
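
Unlike the start failures (exit status 80), pause refuses up front with exit status 83 because the host is stopped. The same status query the post-mortem runs doubles as a guard, assuming it is quoted for an interactive shell:

	out/minikube-darwin-arm64 status -p old-k8s-version-004000 --format='{{.Host}}'
	# prints "Stopped" here; pausing is only meaningful once this reports "Running"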

TestStartStop/group/no-preload/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-814000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-814000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.845120333s)

-- stdout --
	* [no-preload-814000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-814000" primary control-plane node in "no-preload-814000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-814000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:45:57.270284    6442 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:45:57.270421    6442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:45:57.270425    6442 out.go:304] Setting ErrFile to fd 2...
	I0729 16:45:57.270427    6442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:45:57.270574    6442 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:45:57.271621    6442 out.go:298] Setting JSON to false
	I0729 16:45:57.287581    6442 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4524,"bootTime":1722292233,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:45:57.287656    6442 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:45:57.292806    6442 out.go:177] * [no-preload-814000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:45:57.297710    6442 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:45:57.297735    6442 notify.go:220] Checking for updates...
	I0729 16:45:57.304734    6442 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:45:57.307685    6442 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:45:57.310776    6442 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:45:57.312210    6442 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:45:57.315783    6442 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:45:57.319043    6442 config.go:182] Loaded profile config "multinode-971000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:45:57.319100    6442 config.go:182] Loaded profile config "stopped-upgrade-170000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 16:45:57.319154    6442 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:45:57.320786    6442 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:45:57.327757    6442 start.go:297] selected driver: qemu2
	I0729 16:45:57.327763    6442 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:45:57.327768    6442 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:45:57.330085    6442 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:45:57.332731    6442 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:45:57.336778    6442 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:45:57.336810    6442 cni.go:84] Creating CNI manager for ""
	I0729 16:45:57.336817    6442 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:45:57.336819    6442 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:45:57.336842    6442 start.go:340] cluster config:
	{Name:no-preload-814000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:45:57.340314    6442 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:45:57.348707    6442 out.go:177] * Starting "no-preload-814000" primary control-plane node in "no-preload-814000" cluster
	I0729 16:45:57.352737    6442 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 16:45:57.352797    6442 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/no-preload-814000/config.json ...
	I0729 16:45:57.352811    6442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/no-preload-814000/config.json: {Name:mk3e8ba021c564801f9638ed0616c4ce6fade23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:45:57.352815    6442 cache.go:107] acquiring lock: {Name:mk398b2a2c30354278149aa4f8fa41608d46d5dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:45:57.352818    6442 cache.go:107] acquiring lock: {Name:mk7a39edfa8686017f7658af3169b1d2c77ef004 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:45:57.352851    6442 cache.go:107] acquiring lock: {Name:mk22b7784272f4598406c23bd32e64404a80fa4a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:45:57.352869    6442 cache.go:115] /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 16:45:57.352874    6442 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 61.042µs
	I0729 16:45:57.352880    6442 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 16:45:57.352885    6442 cache.go:107] acquiring lock: {Name:mk8b400e2ece8ed210065904fe208afeabf4653c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:45:57.352976    6442 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 16:45:57.353000    6442 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 16:45:57.353014    6442 cache.go:107] acquiring lock: {Name:mkc46b99689b8d4a8fe4330aef93a086809d09fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:45:57.353001    6442 cache.go:107] acquiring lock: {Name:mkb347a8ae3bf75f891fa11b73ee46333a5fb6de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:45:57.353000    6442 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 16:45:57.353065    6442 cache.go:107] acquiring lock: {Name:mk6e319e94b9693c8768b60652e498345a917b0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:45:57.353067    6442 cache.go:107] acquiring lock: {Name:mk108212deaa15223269e734d6dba1b33d50946e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:45:57.353131    6442 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 16:45:57.353169    6442 start.go:360] acquireMachinesLock for no-preload-814000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:45:57.353225    6442 start.go:364] duration metric: took 51.292µs to acquireMachinesLock for "no-preload-814000"
	I0729 16:45:57.353232    6442 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 16:45:57.353241    6442 start.go:93] Provisioning new machine with config: &{Name:no-preload-814000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:45:57.353284    6442 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:45:57.353321    6442 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 16:45:57.353348    6442 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 16:45:57.360730    6442 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:45:57.365453    6442 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 16:45:57.365574    6442 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 16:45:57.365732    6442 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 16:45:57.365938    6442 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 16:45:57.367702    6442 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 16:45:57.367728    6442 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 16:45:57.367832    6442 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 16:45:57.376445    6442 start.go:159] libmachine.API.Create for "no-preload-814000" (driver="qemu2")
	I0729 16:45:57.376464    6442 client.go:168] LocalClient.Create starting
	I0729 16:45:57.376524    6442 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:45:57.376554    6442 main.go:141] libmachine: Decoding PEM data...
	I0729 16:45:57.376565    6442 main.go:141] libmachine: Parsing certificate...
	I0729 16:45:57.376613    6442 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:45:57.376638    6442 main.go:141] libmachine: Decoding PEM data...
	I0729 16:45:57.376648    6442 main.go:141] libmachine: Parsing certificate...
	I0729 16:45:57.377018    6442 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:45:57.530558    6442 main.go:141] libmachine: Creating SSH key...
	I0729 16:45:57.670107    6442 main.go:141] libmachine: Creating Disk image...
	I0729 16:45:57.670124    6442 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:45:57.670365    6442 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/no-preload-814000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/no-preload-814000/disk.qcow2
	I0729 16:45:57.679819    6442 main.go:141] libmachine: STDOUT: 
	I0729 16:45:57.679836    6442 main.go:141] libmachine: STDERR: 
	I0729 16:45:57.679882    6442 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/no-preload-814000/disk.qcow2 +20000M
	I0729 16:45:57.688198    6442 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:45:57.688212    6442 main.go:141] libmachine: STDERR: 
	I0729 16:45:57.688226    6442 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/no-preload-814000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/no-preload-814000/disk.qcow2
	I0729 16:45:57.688230    6442 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:45:57.688240    6442 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:45:57.688275    6442 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/no-preload-814000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/no-preload-814000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/no-preload-814000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:08:dd:10:7e:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/no-preload-814000/disk.qcow2
	I0729 16:45:57.690093    6442 main.go:141] libmachine: STDOUT: 
	I0729 16:45:57.690108    6442 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:45:57.690132    6442 client.go:171] duration metric: took 313.673875ms to LocalClient.Create
	I0729 16:45:57.759574    6442 cache.go:162] opening:  /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 16:45:57.766108    6442 cache.go:162] opening:  /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 16:45:57.787968    6442 cache.go:162] opening:  /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0729 16:45:57.802764    6442 cache.go:162] opening:  /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0
	I0729 16:45:57.849689    6442 cache.go:162] opening:  /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 16:45:57.855605    6442 cache.go:162] opening:  /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 16:45:57.862092    6442 cache.go:162] opening:  /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 16:45:57.971181    6442 cache.go:157] /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0729 16:45:57.971198    6442 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 618.252209ms
	I0729 16:45:57.971213    6442 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0729 16:45:59.690197    6442 start.go:128] duration metric: took 2.336973083s to createHost
	I0729 16:45:59.690222    6442 start.go:83] releasing machines lock for "no-preload-814000", held for 2.337061834s
	W0729 16:45:59.690238    6442 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:45:59.699991    6442 out.go:177] * Deleting "no-preload-814000" in qemu2 ...
	W0729 16:45:59.715700    6442 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:45:59.715716    6442 start.go:729] Will try again in 5 seconds ...
	I0729 16:46:00.006896    6442 cache.go:157] /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0729 16:46:00.006928    6442 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 2.653967458s
	I0729 16:46:00.006946    6442 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0729 16:46:00.164596    6442 cache.go:157] /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0729 16:46:00.164647    6442 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 2.811843833s
	I0729 16:46:00.164665    6442 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0729 16:46:01.505779    6442 cache.go:157] /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0729 16:46:01.505801    6442 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 4.153111833s
	I0729 16:46:01.505819    6442 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0729 16:46:01.981895    6442 cache.go:157] /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0729 16:46:01.981912    6442 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 4.629015958s
	I0729 16:46:01.981919    6442 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0729 16:46:02.489653    6442 cache.go:157] /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0729 16:46:02.489686    6442 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 5.137009625s
	I0729 16:46:02.489700    6442 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0729 16:46:04.715690    6442 start.go:360] acquireMachinesLock for no-preload-814000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:46:04.715916    6442 start.go:364] duration metric: took 196.791µs to acquireMachinesLock for "no-preload-814000"
	I0729 16:46:04.715984    6442 start.go:93] Provisioning new machine with config: &{Name:no-preload-814000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:46:04.716099    6442 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:46:04.724457    6442 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:46:04.756430    6442 start.go:159] libmachine.API.Create for "no-preload-814000" (driver="qemu2")
	I0729 16:46:04.756475    6442 client.go:168] LocalClient.Create starting
	I0729 16:46:04.756580    6442 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:46:04.756639    6442 main.go:141] libmachine: Decoding PEM data...
	I0729 16:46:04.756658    6442 main.go:141] libmachine: Parsing certificate...
	I0729 16:46:04.756735    6442 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:46:04.756774    6442 main.go:141] libmachine: Decoding PEM data...
	I0729 16:46:04.756786    6442 main.go:141] libmachine: Parsing certificate...
	I0729 16:46:04.757226    6442 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:46:04.910036    6442 main.go:141] libmachine: Creating SSH key...
	I0729 16:46:05.021803    6442 main.go:141] libmachine: Creating Disk image...
	I0729 16:46:05.021809    6442 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:46:05.022032    6442 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/no-preload-814000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/no-preload-814000/disk.qcow2
	I0729 16:46:05.031385    6442 main.go:141] libmachine: STDOUT: 
	I0729 16:46:05.031402    6442 main.go:141] libmachine: STDERR: 
	I0729 16:46:05.031457    6442 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/no-preload-814000/disk.qcow2 +20000M
	I0729 16:46:05.039733    6442 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:46:05.039749    6442 main.go:141] libmachine: STDERR: 
	I0729 16:46:05.039759    6442 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/no-preload-814000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/no-preload-814000/disk.qcow2
	I0729 16:46:05.039765    6442 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:46:05.039776    6442 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:46:05.039812    6442 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/no-preload-814000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/no-preload-814000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/no-preload-814000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:d9:e7:79:69:68 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/no-preload-814000/disk.qcow2
	I0729 16:46:05.041597    6442 main.go:141] libmachine: STDOUT: 
	I0729 16:46:05.041616    6442 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:46:05.041628    6442 client.go:171] duration metric: took 285.155416ms to LocalClient.Create
	I0729 16:46:05.835589    6442 cache.go:157] /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0729 16:46:05.835639    6442 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 8.482876625s
	I0729 16:46:05.835658    6442 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0729 16:46:05.835696    6442 cache.go:87] Successfully saved all images to host disk.
	I0729 16:46:07.043819    6442 start.go:128] duration metric: took 2.327750125s to createHost
	I0729 16:46:07.043912    6442 start.go:83] releasing machines lock for "no-preload-814000", held for 2.328050375s
	W0729 16:46:07.044187    6442 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-814000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-814000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:46:07.054734    6442 out.go:177] 
	W0729 16:46:07.062977    6442 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:46:07.063034    6442 out.go:239] * 
	* 
	W0729 16:46:07.067074    6442 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:46:07.073698    6442 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-814000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-814000 -n no-preload-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-814000 -n no-preload-814000: exit status 7 (67.214792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-814000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.92s)
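
Every FirstStart failure in this group has the same shape: the qcow2 disk is created, the per-image cache for v1.31.0-beta.0 fills in the background (all seven images saved to host disk), and the run dies only when libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client and the client cannot reach the host-side daemon at /var/run/socket_vmnet ("Connection refused"). A minimal sketch of that reachability check, assuming the SocketVMnetPath from the cluster config above and a user with access to the socket; this is not part of the test suite:

    // socketcheck.go: dial the same unix socket socket_vmnet_client uses.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Path taken from SocketVMnetPath in the captured cluster config.
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
    	if err != nil {
    		// On this agent the dial fails the same way the captured STDERR does.
    		fmt.Println("socket_vmnet unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }

If the dial fails like this, restoring the socket_vmnet service on the agent is the likely fix; the VM creation itself never gets far enough for anything else to matter.
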
TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-814000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-814000 create -f testdata/busybox.yaml: exit status 1 (31.539584ms)

** stderr ** 
	error: context "no-preload-814000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-814000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-814000 -n no-preload-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-814000 -n no-preload-814000: exit status 7 (28.852083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-814000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-814000 -n no-preload-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-814000 -n no-preload-814000: exit status 7 (29.278375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-814000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
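
DeployApp is a cascade failure: FirstStart never created the cluster, so no kubeconfig context named "no-preload-814000" exists and kubectl exits before touching busybox.yaml. A pre-check in the same shell-out style the harness uses (sketch only; the context name is the profile under test, and kubectl's exact error wording may differ from the create step's):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// kubectl exits non-zero when the named context is absent from kubeconfig.
    	out, err := exec.Command("kubectl", "config", "get-contexts", "no-preload-814000").CombinedOutput()
    	if err != nil {
    		fmt.Printf("context check failed: %v\n%s", err, out)
    	}
    }
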
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-814000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-814000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-814000 describe deploy/metrics-server -n kube-system: exit status 1 (27.223ms)

** stderr ** 
	error: context "no-preload-814000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-814000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-814000 -n no-preload-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-814000 -n no-preload-814000: exit status 7 (29.453416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-814000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
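
Note the asymmetry here: "addons enable metrics-server" itself exits zero, and only the follow-up "kubectl describe" fails on the missing context, so the image assertion at start_stop_delete_test.go:221 compares against an empty deployment description. The expected string is just the --registries value joined to the --images value from the command above; a sketch of that assembly (illustrative only, not minikube's code):

    package main

    import "fmt"

    func main() {
    	registry := "fake.domain"                 // from --registries=MetricsServer=fake.domain
    	image := "registry.k8s.io/echoserver:1.4" // from --images=MetricsServer=registry.k8s.io/echoserver:1.4
    	fmt.Println(registry + "/" + image)       // fake.domain/registry.k8s.io/echoserver:1.4
    }
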
TestStartStop/group/embed-certs/serial/FirstStart (10.13s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-449000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-449000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (10.074278083s)

-- stdout --
	* [embed-certs-449000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-449000" primary control-plane node in "embed-certs-449000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-449000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	

-- /stdout --
** stderr ** 
	I0729 16:46:10.253322    6515 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:46:10.253440    6515 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:46:10.253443    6515 out.go:304] Setting ErrFile to fd 2...
	I0729 16:46:10.253446    6515 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:46:10.253582    6515 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:46:10.254640    6515 out.go:298] Setting JSON to false
	I0729 16:46:10.270750    6515 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4537,"bootTime":1722292233,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:46:10.270852    6515 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:46:10.275421    6515 out.go:177] * [embed-certs-449000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:46:10.283289    6515 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:46:10.283328    6515 notify.go:220] Checking for updates...
	I0729 16:46:10.291381    6515 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:46:10.299327    6515 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:46:10.306362    6515 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:46:10.310402    6515 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:46:10.313381    6515 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:46:10.317707    6515 config.go:182] Loaded profile config "multinode-971000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:46:10.317782    6515 config.go:182] Loaded profile config "no-preload-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 16:46:10.317834    6515 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:46:10.322321    6515 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:46:10.327323    6515 start.go:297] selected driver: qemu2
	I0729 16:46:10.327330    6515 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:46:10.327338    6515 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:46:10.329805    6515 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:46:10.332432    6515 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:46:10.335567    6515 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:46:10.335583    6515 cni.go:84] Creating CNI manager for ""
	I0729 16:46:10.335594    6515 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:46:10.335605    6515 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:46:10.335641    6515 start.go:340] cluster config:
	{Name:embed-certs-449000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-449000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:46:10.339792    6515 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:46:10.352361    6515 out.go:177] * Starting "embed-certs-449000" primary control-plane node in "embed-certs-449000" cluster
	I0729 16:46:10.356352    6515 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:46:10.356371    6515 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:46:10.356390    6515 cache.go:56] Caching tarball of preloaded images
	I0729 16:46:10.356459    6515 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:46:10.356473    6515 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:46:10.356549    6515 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/embed-certs-449000/config.json ...
	I0729 16:46:10.356560    6515 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/embed-certs-449000/config.json: {Name:mk321a1fee08ae713c10fa647f7b88a470e8017c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:46:10.356974    6515 start.go:360] acquireMachinesLock for embed-certs-449000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:46:10.357012    6515 start.go:364] duration metric: took 30.917µs to acquireMachinesLock for "embed-certs-449000"
	I0729 16:46:10.357024    6515 start.go:93] Provisioning new machine with config: &{Name:embed-certs-449000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-449000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:46:10.357066    6515 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:46:10.365417    6515 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:46:10.383505    6515 start.go:159] libmachine.API.Create for "embed-certs-449000" (driver="qemu2")
	I0729 16:46:10.383540    6515 client.go:168] LocalClient.Create starting
	I0729 16:46:10.383609    6515 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:46:10.383640    6515 main.go:141] libmachine: Decoding PEM data...
	I0729 16:46:10.383658    6515 main.go:141] libmachine: Parsing certificate...
	I0729 16:46:10.383701    6515 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:46:10.383726    6515 main.go:141] libmachine: Decoding PEM data...
	I0729 16:46:10.383736    6515 main.go:141] libmachine: Parsing certificate...
	I0729 16:46:10.384138    6515 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:46:10.701045    6515 main.go:141] libmachine: Creating SSH key...
	I0729 16:46:10.849211    6515 main.go:141] libmachine: Creating Disk image...
	I0729 16:46:10.849219    6515 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:46:10.849412    6515 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/embed-certs-449000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/embed-certs-449000/disk.qcow2
	I0729 16:46:10.858943    6515 main.go:141] libmachine: STDOUT: 
	I0729 16:46:10.858960    6515 main.go:141] libmachine: STDERR: 
	I0729 16:46:10.859005    6515 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/embed-certs-449000/disk.qcow2 +20000M
	I0729 16:46:10.866812    6515 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:46:10.866826    6515 main.go:141] libmachine: STDERR: 
	I0729 16:46:10.866839    6515 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/embed-certs-449000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/embed-certs-449000/disk.qcow2
	I0729 16:46:10.866842    6515 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:46:10.866856    6515 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:46:10.866888    6515 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/embed-certs-449000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/embed-certs-449000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/embed-certs-449000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:60:ba:83:f5:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/embed-certs-449000/disk.qcow2
	I0729 16:46:10.868537    6515 main.go:141] libmachine: STDOUT: 
	I0729 16:46:10.868551    6515 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:46:10.868568    6515 client.go:171] duration metric: took 485.039833ms to LocalClient.Create
	I0729 16:46:12.870743    6515 start.go:128] duration metric: took 2.513719459s to createHost
	I0729 16:46:12.870850    6515 start.go:83] releasing machines lock for "embed-certs-449000", held for 2.513903334s
	W0729 16:46:12.870908    6515 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:46:12.886572    6515 out.go:177] * Deleting "embed-certs-449000" in qemu2 ...
	W0729 16:46:12.914839    6515 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:46:12.914871    6515 start.go:729] Will try again in 5 seconds ...
	I0729 16:46:17.916863    6515 start.go:360] acquireMachinesLock for embed-certs-449000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:46:17.927766    6515 start.go:364] duration metric: took 10.810667ms to acquireMachinesLock for "embed-certs-449000"
	I0729 16:46:17.927856    6515 start.go:93] Provisioning new machine with config: &{Name:embed-certs-449000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-449000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:46:17.928240    6515 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:46:17.938690    6515 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:46:17.989648    6515 start.go:159] libmachine.API.Create for "embed-certs-449000" (driver="qemu2")
	I0729 16:46:17.989710    6515 client.go:168] LocalClient.Create starting
	I0729 16:46:17.989808    6515 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:46:17.989875    6515 main.go:141] libmachine: Decoding PEM data...
	I0729 16:46:17.989894    6515 main.go:141] libmachine: Parsing certificate...
	I0729 16:46:17.989960    6515 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:46:17.990004    6515 main.go:141] libmachine: Decoding PEM data...
	I0729 16:46:17.990015    6515 main.go:141] libmachine: Parsing certificate...
	I0729 16:46:17.990574    6515 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:46:18.154271    6515 main.go:141] libmachine: Creating SSH key...
	I0729 16:46:18.222474    6515 main.go:141] libmachine: Creating Disk image...
	I0729 16:46:18.222487    6515 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:46:18.222713    6515 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/embed-certs-449000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/embed-certs-449000/disk.qcow2
	I0729 16:46:18.241829    6515 main.go:141] libmachine: STDOUT: 
	I0729 16:46:18.241852    6515 main.go:141] libmachine: STDERR: 
	I0729 16:46:18.241904    6515 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/embed-certs-449000/disk.qcow2 +20000M
	I0729 16:46:18.250505    6515 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:46:18.250523    6515 main.go:141] libmachine: STDERR: 
	I0729 16:46:18.250542    6515 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/embed-certs-449000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/embed-certs-449000/disk.qcow2
	I0729 16:46:18.250547    6515 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:46:18.250559    6515 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:46:18.250600    6515 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/embed-certs-449000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/embed-certs-449000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/embed-certs-449000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:d4:f6:50:ad:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/embed-certs-449000/disk.qcow2
	I0729 16:46:18.252321    6515 main.go:141] libmachine: STDOUT: 
	I0729 16:46:18.252338    6515 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:46:18.252352    6515 client.go:171] duration metric: took 262.644958ms to LocalClient.Create
	I0729 16:46:20.254462    6515 start.go:128] duration metric: took 2.326256458s to createHost
	I0729 16:46:20.254637    6515 start.go:83] releasing machines lock for "embed-certs-449000", held for 2.326904875s
	W0729 16:46:20.254944    6515 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-449000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-449000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:46:20.272724    6515 out.go:177] 
	W0729 16:46:20.276834    6515 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:46:20.276869    6515 out.go:239] * 
	* 
	W0729 16:46:20.279294    6515 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:46:20.289766    6515 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-449000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-449000 -n embed-certs-449000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-449000 -n embed-certs-449000: exit status 7 (50.773667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-449000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.13s)
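
The failures in this group all bottom out on the same line: Failed to connect to "/var/run/socket_vmnet": Connection refused. The qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, so no VM can boot while the socket_vmnet daemon is not listening on the CI host. A minimal triage sketch, assuming socket_vmnet was installed via Homebrew (the /opt/socket_vmnet and /opt/homebrew paths in the log suggest this, but the log does not confirm it):

	# Does the unix socket exist, and is any daemon attached to it?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet
	# If the daemon is registered as a Homebrew service, restarting it is one candidate fix
	sudo brew services restart socket_vmnet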

TestStartStop/group/no-preload/serial/SecondStart (7.52s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-814000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-814000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (7.461947417s)

-- stdout --
	* [no-preload-814000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-814000" primary control-plane node in "no-preload-814000" cluster
	* Restarting existing qemu2 VM for "no-preload-814000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-814000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:46:10.537958    6528 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:46:10.538108    6528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:46:10.538111    6528 out.go:304] Setting ErrFile to fd 2...
	I0729 16:46:10.538114    6528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:46:10.538260    6528 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:46:10.539579    6528 out.go:298] Setting JSON to false
	I0729 16:46:10.559040    6528 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4537,"bootTime":1722292233,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:46:10.559163    6528 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:46:10.585629    6528 out.go:177] * [no-preload-814000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:46:10.593488    6528 notify.go:220] Checking for updates...
	I0729 16:46:10.598303    6528 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:46:10.609366    6528 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:46:10.617154    6528 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:46:10.625376    6528 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:46:10.632320    6528 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:46:10.641390    6528 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:46:10.645687    6528 config.go:182] Loaded profile config "no-preload-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 16:46:10.646015    6528 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:46:10.650412    6528 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:46:10.658436    6528 start.go:297] selected driver: qemu2
	I0729 16:46:10.658447    6528 start.go:901] validating driver "qemu2" against &{Name:no-preload-814000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:46:10.658550    6528 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:46:10.662146    6528 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:46:10.662210    6528 cni.go:84] Creating CNI manager for ""
	I0729 16:46:10.662221    6528 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:46:10.662249    6528 start.go:340] cluster config:
	{Name:no-preload-814000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:46:10.667517    6528 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:46:10.676282    6528 out.go:177] * Starting "no-preload-814000" primary control-plane node in "no-preload-814000" cluster
	I0729 16:46:10.680397    6528 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 16:46:10.680547    6528 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/no-preload-814000/config.json ...
	I0729 16:46:10.680554    6528 cache.go:107] acquiring lock: {Name:mk398b2a2c30354278149aa4f8fa41608d46d5dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:46:10.680564    6528 cache.go:107] acquiring lock: {Name:mk7a39edfa8686017f7658af3169b1d2c77ef004 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:46:10.680595    6528 cache.go:107] acquiring lock: {Name:mkb347a8ae3bf75f891fa11b73ee46333a5fb6de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:46:10.680637    6528 cache.go:115] /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 16:46:10.680648    6528 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 96.542µs
	I0729 16:46:10.680654    6528 cache.go:115] /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0729 16:46:10.680659    6528 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 16:46:10.680662    6528 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 115.042µs
	I0729 16:46:10.680668    6528 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0729 16:46:10.680673    6528 cache.go:115] /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0729 16:46:10.680670    6528 cache.go:107] acquiring lock: {Name:mkc46b99689b8d4a8fe4330aef93a086809d09fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:46:10.680680    6528 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 97.042µs
	I0729 16:46:10.680678    6528 cache.go:107] acquiring lock: {Name:mk8b400e2ece8ed210065904fe208afeabf4653c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:46:10.680685    6528 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0729 16:46:10.680694    6528 cache.go:107] acquiring lock: {Name:mk22b7784272f4598406c23bd32e64404a80fa4a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:46:10.680722    6528 cache.go:115] /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0729 16:46:10.680731    6528 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 61.667µs
	I0729 16:46:10.680735    6528 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0729 16:46:10.680694    6528 cache.go:107] acquiring lock: {Name:mk6e319e94b9693c8768b60652e498345a917b0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:46:10.680742    6528 cache.go:115] /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0729 16:46:10.680753    6528 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 75.625µs
	I0729 16:46:10.680759    6528 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0729 16:46:10.680758    6528 cache.go:107] acquiring lock: {Name:mk108212deaa15223269e734d6dba1b33d50946e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:46:10.680765    6528 cache.go:115] /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0729 16:46:10.680812    6528 cache.go:115] /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0729 16:46:10.680778    6528 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 107.083µs
	I0729 16:46:10.680845    6528 cache.go:115] /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0729 16:46:10.680849    6528 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0729 16:46:10.680818    6528 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 84.708µs
	I0729 16:46:10.680854    6528 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 157.125µs
	I0729 16:46:10.680859    6528 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0729 16:46:10.680861    6528 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0729 16:46:10.680866    6528 cache.go:87] Successfully saved all images to host disk.
	I0729 16:46:10.681123    6528 start.go:360] acquireMachinesLock for no-preload-814000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:46:12.871055    6528 start.go:364] duration metric: took 2.189898125s to acquireMachinesLock for "no-preload-814000"
	I0729 16:46:12.871144    6528 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:46:12.871184    6528 fix.go:54] fixHost starting: 
	I0729 16:46:12.871842    6528 fix.go:112] recreateIfNeeded on no-preload-814000: state=Stopped err=<nil>
	W0729 16:46:12.871889    6528 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:46:12.881528    6528 out.go:177] * Restarting existing qemu2 VM for "no-preload-814000" ...
	I0729 16:46:12.889597    6528 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:46:12.889834    6528 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/no-preload-814000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/no-preload-814000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/no-preload-814000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:d9:e7:79:69:68 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/no-preload-814000/disk.qcow2
	I0729 16:46:12.900287    6528 main.go:141] libmachine: STDOUT: 
	I0729 16:46:12.900352    6528 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:46:12.900449    6528 fix.go:56] duration metric: took 29.269292ms for fixHost
	I0729 16:46:12.900471    6528 start.go:83] releasing machines lock for "no-preload-814000", held for 29.377083ms
	W0729 16:46:12.900496    6528 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:46:12.900642    6528 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:46:12.900659    6528 start.go:729] Will try again in 5 seconds ...
	I0729 16:46:17.902759    6528 start.go:360] acquireMachinesLock for no-preload-814000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:46:17.903289    6528 start.go:364] duration metric: took 412.333µs to acquireMachinesLock for "no-preload-814000"
	I0729 16:46:17.903436    6528 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:46:17.903457    6528 fix.go:54] fixHost starting: 
	I0729 16:46:17.904243    6528 fix.go:112] recreateIfNeeded on no-preload-814000: state=Stopped err=<nil>
	W0729 16:46:17.904274    6528 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:46:17.913771    6528 out.go:177] * Restarting existing qemu2 VM for "no-preload-814000" ...
	I0729 16:46:17.917611    6528 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:46:17.917891    6528 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/no-preload-814000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/no-preload-814000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/no-preload-814000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:d9:e7:79:69:68 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/no-preload-814000/disk.qcow2
	I0729 16:46:17.927513    6528 main.go:141] libmachine: STDOUT: 
	I0729 16:46:17.927578    6528 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:46:17.927663    6528 fix.go:56] duration metric: took 24.20775ms for fixHost
	I0729 16:46:17.927685    6528 start.go:83] releasing machines lock for "no-preload-814000", held for 24.374416ms
	W0729 16:46:17.927896    6528 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-814000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-814000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:46:17.938692    6528 out.go:177] 
	W0729 16:46:17.942903    6528 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:46:17.942936    6528 out.go:239] * 
	* 
	W0729 16:46:17.945570    6528 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:46:17.955732    6528 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-814000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-814000 -n no-preload-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-814000 -n no-preload-814000: exit status 7 (51.870667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-814000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (7.52s)
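
Note the retry shape above: fixHost fails in about 29ms, minikube logs "Will try again in 5 seconds ...", the second attempt fails identically, and the run exits with GUEST_PROVISION (exit status 80). The refusal can be reproduced independently of minikube; a hedged sketch, assuming macOS's BSD nc (which accepts -U for unix-domain sockets):

	# Plain connect test against the socket the qemu2 driver needs
	nc -U /var/run/socket_vmnet < /dev/null && echo connected || echo refused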

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-814000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-814000 -n no-preload-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-814000 -n no-preload-814000: exit status 7 (34.3055ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-814000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-814000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-814000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-814000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.915459ms)

** stderr ** 
	error: context "no-preload-814000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-814000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-814000 -n no-preload-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-814000 -n no-preload-814000: exit status 7 (33.725042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-814000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
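
The failure mode shifts here: kubectl itself exits 1 with context "no-preload-814000" does not exist. Because the VM never booted, minikube never wrote a context for this profile into the kubeconfig, so every kubectl --context call fails at client-config time rather than at the API server. A quick confirmation sketch (a hypothetical invocation, not part of the test run):

	KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig kubectl config get-contexts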

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-814000 image list --format=json
E0729 16:46:18.129005    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-814000 -n no-preload-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-814000 -n no-preload-814000: exit status 7 (30.857042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-814000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
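
The (-want +got) diff above uses go-cmp conventions: each line prefixed with - is an image the test expected "image list" to report, and the got side is empty because there was no running host to query. The cache.go lines earlier in this log show all eight images were already saved to the host-side cache, so the miss reflects only the dead VM. A sketch to cross-check the cache, using a path taken verbatim from the log:

	ls /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/images/arm64/registry.k8s.io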

TestStartStop/group/no-preload/serial/Pause (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-814000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-814000 --alsologtostderr -v=1: exit status 83 (46.65875ms)

-- stdout --
	* The control-plane node no-preload-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-814000"

-- /stdout --
** stderr ** 
	I0729 16:46:18.229713    6554 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:46:18.229853    6554 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:46:18.229857    6554 out.go:304] Setting ErrFile to fd 2...
	I0729 16:46:18.229859    6554 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:46:18.229981    6554 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:46:18.230196    6554 out.go:298] Setting JSON to false
	I0729 16:46:18.230204    6554 mustload.go:65] Loading cluster: no-preload-814000
	I0729 16:46:18.230382    6554 config.go:182] Loaded profile config "no-preload-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 16:46:18.234785    6554 out.go:177] * The control-plane node no-preload-814000 host is not running: state=Stopped
	I0729 16:46:18.243655    6554 out.go:177]   To start a cluster, run: "minikube start -p no-preload-814000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-814000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-814000 -n no-preload-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-814000 -n no-preload-814000: exit status 7 (29.278458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-814000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-814000 -n no-preload-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-814000 -n no-preload-814000: exit status 7 (28.862917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-814000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-770000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-770000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (11.451162084s)

-- stdout --
	* [default-k8s-diff-port-770000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-770000" primary control-plane node in "default-k8s-diff-port-770000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-770000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:46:18.654927    6581 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:46:18.655085    6581 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:46:18.655089    6581 out.go:304] Setting ErrFile to fd 2...
	I0729 16:46:18.655091    6581 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:46:18.655233    6581 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:46:18.656347    6581 out.go:298] Setting JSON to false
	I0729 16:46:18.672676    6581 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4545,"bootTime":1722292233,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:46:18.672748    6581 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:46:18.677729    6581 out.go:177] * [default-k8s-diff-port-770000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:46:18.683618    6581 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:46:18.683656    6581 notify.go:220] Checking for updates...
	I0729 16:46:18.691699    6581 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:46:18.695721    6581 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:46:18.698714    6581 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:46:18.701703    6581 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:46:18.704722    6581 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:46:18.708069    6581 config.go:182] Loaded profile config "embed-certs-449000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:46:18.708133    6581 config.go:182] Loaded profile config "multinode-971000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:46:18.708189    6581 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:46:18.711705    6581 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:46:18.718738    6581 start.go:297] selected driver: qemu2
	I0729 16:46:18.718749    6581 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:46:18.718756    6581 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:46:18.721221    6581 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 16:46:18.724718    6581 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:46:18.728809    6581 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:46:18.728826    6581 cni.go:84] Creating CNI manager for ""
	I0729 16:46:18.728835    6581 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:46:18.728840    6581 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:46:18.728885    6581 start.go:340] cluster config:
	{Name:default-k8s-diff-port-770000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:46:18.732653    6581 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:46:18.740733    6581 out.go:177] * Starting "default-k8s-diff-port-770000" primary control-plane node in "default-k8s-diff-port-770000" cluster
	I0729 16:46:18.744701    6581 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:46:18.744720    6581 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:46:18.744733    6581 cache.go:56] Caching tarball of preloaded images
	I0729 16:46:18.744800    6581 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:46:18.744807    6581 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:46:18.744876    6581 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/default-k8s-diff-port-770000/config.json ...
	I0729 16:46:18.744894    6581 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/default-k8s-diff-port-770000/config.json: {Name:mka8913e0f0c5bbc46021fd706dcab1f7ba86b98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:46:18.745121    6581 start.go:360] acquireMachinesLock for default-k8s-diff-port-770000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:46:20.254750    6581 start.go:364] duration metric: took 1.509644417s to acquireMachinesLock for "default-k8s-diff-port-770000"
	I0729 16:46:20.254954    6581 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:46:20.255210    6581 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:46:20.272768    6581 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:46:20.322147    6581 start.go:159] libmachine.API.Create for "default-k8s-diff-port-770000" (driver="qemu2")
	I0729 16:46:20.322195    6581 client.go:168] LocalClient.Create starting
	I0729 16:46:20.322319    6581 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:46:20.322377    6581 main.go:141] libmachine: Decoding PEM data...
	I0729 16:46:20.322398    6581 main.go:141] libmachine: Parsing certificate...
	I0729 16:46:20.322467    6581 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:46:20.322510    6581 main.go:141] libmachine: Decoding PEM data...
	I0729 16:46:20.322522    6581 main.go:141] libmachine: Parsing certificate...
	I0729 16:46:20.323129    6581 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:46:20.482635    6581 main.go:141] libmachine: Creating SSH key...
	I0729 16:46:20.544462    6581 main.go:141] libmachine: Creating Disk image...
	I0729 16:46:20.544477    6581 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:46:20.544680    6581 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/default-k8s-diff-port-770000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/default-k8s-diff-port-770000/disk.qcow2
	I0729 16:46:20.554640    6581 main.go:141] libmachine: STDOUT: 
	I0729 16:46:20.554662    6581 main.go:141] libmachine: STDERR: 
	I0729 16:46:20.554719    6581 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/default-k8s-diff-port-770000/disk.qcow2 +20000M
	I0729 16:46:20.563680    6581 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:46:20.563699    6581 main.go:141] libmachine: STDERR: 
	I0729 16:46:20.563721    6581 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/default-k8s-diff-port-770000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/default-k8s-diff-port-770000/disk.qcow2
	I0729 16:46:20.563727    6581 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:46:20.563743    6581 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:46:20.563767    6581 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/default-k8s-diff-port-770000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/default-k8s-diff-port-770000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/default-k8s-diff-port-770000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:26:34:92:12:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/default-k8s-diff-port-770000/disk.qcow2
	I0729 16:46:20.565666    6581 main.go:141] libmachine: STDOUT: 
	I0729 16:46:20.565686    6581 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:46:20.565705    6581 client.go:171] duration metric: took 243.512833ms to LocalClient.Create
	I0729 16:46:22.567908    6581 start.go:128] duration metric: took 2.31270325s to createHost
	I0729 16:46:22.568020    6581 start.go:83] releasing machines lock for "default-k8s-diff-port-770000", held for 2.313271542s
	W0729 16:46:22.568060    6581 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:46:22.578094    6581 out.go:177] * Deleting "default-k8s-diff-port-770000" in qemu2 ...
	W0729 16:46:22.601777    6581 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:46:22.601832    6581 start.go:729] Will try again in 5 seconds ...
	I0729 16:46:27.603954    6581 start.go:360] acquireMachinesLock for default-k8s-diff-port-770000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:46:27.604335    6581 start.go:364] duration metric: took 297.875µs to acquireMachinesLock for "default-k8s-diff-port-770000"
	I0729 16:46:27.604468    6581 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:46:27.604818    6581 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:46:27.614347    6581 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:46:27.664474    6581 start.go:159] libmachine.API.Create for "default-k8s-diff-port-770000" (driver="qemu2")
	I0729 16:46:27.664524    6581 client.go:168] LocalClient.Create starting
	I0729 16:46:27.664644    6581 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:46:27.664708    6581 main.go:141] libmachine: Decoding PEM data...
	I0729 16:46:27.664727    6581 main.go:141] libmachine: Parsing certificate...
	I0729 16:46:27.664792    6581 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:46:27.664839    6581 main.go:141] libmachine: Decoding PEM data...
	I0729 16:46:27.664850    6581 main.go:141] libmachine: Parsing certificate...
	I0729 16:46:27.665469    6581 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:46:27.826437    6581 main.go:141] libmachine: Creating SSH key...
	I0729 16:46:27.997954    6581 main.go:141] libmachine: Creating Disk image...
	I0729 16:46:27.997961    6581 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:46:27.998184    6581 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/default-k8s-diff-port-770000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/default-k8s-diff-port-770000/disk.qcow2
	I0729 16:46:28.007674    6581 main.go:141] libmachine: STDOUT: 
	I0729 16:46:28.007690    6581 main.go:141] libmachine: STDERR: 
	I0729 16:46:28.007751    6581 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/default-k8s-diff-port-770000/disk.qcow2 +20000M
	I0729 16:46:28.015576    6581 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:46:28.015593    6581 main.go:141] libmachine: STDERR: 
	I0729 16:46:28.015604    6581 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/default-k8s-diff-port-770000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/default-k8s-diff-port-770000/disk.qcow2
	I0729 16:46:28.015613    6581 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:46:28.015620    6581 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:46:28.015651    6581 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/default-k8s-diff-port-770000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/default-k8s-diff-port-770000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/default-k8s-diff-port-770000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:fa:89:83:6e:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/default-k8s-diff-port-770000/disk.qcow2
	I0729 16:46:28.017282    6581 main.go:141] libmachine: STDOUT: 
	I0729 16:46:28.017302    6581 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:46:28.017314    6581 client.go:171] duration metric: took 352.796416ms to LocalClient.Create
	I0729 16:46:30.019575    6581 start.go:128] duration metric: took 2.414771042s to createHost
	I0729 16:46:30.019658    6581 start.go:83] releasing machines lock for "default-k8s-diff-port-770000", held for 2.415374708s
	W0729 16:46:30.020058    6581 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-770000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-770000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:46:30.030928    6581 out.go:177] 
	W0729 16:46:30.042938    6581 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:46:30.042973    6581 out.go:239] * 
	* 
	W0729 16:46:30.045493    6581 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:46:30.055787    6581 out.go:177] 

                                                
                                                
** /stderr **
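
Note that disk provisioning itself succeeds in the log above; only the subsequent socket_vmnet connect fails. The driver's two qemu-img calls are equivalent to this standalone sketch (file names shortened for illustration; the +20000M resize only grows the image's virtual size, since qcow2 allocates host space lazily):

	# Convert the raw seed disk to qcow2, then grow its virtual size.
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M
	qemu-img info disk.qcow2    # confirm the new "virtual size"
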
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-770000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-770000 -n default-k8s-diff-port-770000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-770000 -n default-k8s-diff-port-770000: exit status 7 (63.643833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-770000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.52s)
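
Every qemu2 failure in this group reduces to the same root cause: nothing is listening on /var/run/socket_vmnet, so the driver's socket_vmnet_client handshake is refused before QEMU ever runs. A minimal triage sketch, assuming socket_vmnet was installed via Homebrew (the service invocation is an assumption about this host's setup, not taken from the log):

	# Is the socket present at the path the driver uses?
	ls -l /var/run/socket_vmnet

	# Restart the daemon (assumed Homebrew service); it must run as root
	# to create the vmnet interface.
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet

	# Smoke-test the client the same way the driver does: socket_vmnet_client
	# connects to the socket and execs the given command with the connection
	# on fd 3, so any command succeeding proves the socket accepts clients.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo connected
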

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-449000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-449000 create -f testdata/busybox.yaml: exit status 1 (31.563459ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-449000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-449000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-449000 -n embed-certs-449000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-449000 -n embed-certs-449000: exit status 7 (33.664083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-449000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-449000 -n embed-certs-449000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-449000 -n embed-certs-449000: exit status 7 (34.200084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-449000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
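
This failure is purely downstream of FirstStart: minikube only writes a kubeconfig entry once a cluster actually comes up, so the embed-certs-449000 context was never created. Confirming that directly (standard kubectl, nothing assumed):

	# List all contexts; a successful start would have added embed-certs-449000.
	kubectl config get-contexts

	# Or check just the one name the test passes via --context.
	kubectl config get-contexts embed-certs-449000
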

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-449000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-449000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-449000 describe deploy/metrics-server -n kube-system: exit status 1 (27.561167ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-449000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-449000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-449000 -n embed-certs-449000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-449000 -n embed-certs-449000: exit status 7 (28.778583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-449000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)
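
With a running cluster, the assertion at start_stop_delete_test.go:221 amounts to checking that the registry override was rewritten into the deployment spec. A sketch of the manual equivalent, reusing the exact commands and expected string from the log above:

	# Enable the addon with the test's image/registry overrides...
	out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-449000 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain

	# ...then verify the rewritten reference landed in the deployment.
	kubectl --context embed-certs-449000 -n kube-system \
	  describe deploy/metrics-server | grep 'fake.domain/registry.k8s.io/echoserver:1.4'
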

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (6.25s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-449000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-449000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (6.188724125s)

                                                
                                                
-- stdout --
	* [embed-certs-449000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-449000" primary control-plane node in "embed-certs-449000" cluster
	* Restarting existing qemu2 VM for "embed-certs-449000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-449000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:46:23.931067    6625 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:46:23.931206    6625 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:46:23.931210    6625 out.go:304] Setting ErrFile to fd 2...
	I0729 16:46:23.931212    6625 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:46:23.931345    6625 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:46:23.932361    6625 out.go:298] Setting JSON to false
	I0729 16:46:23.948570    6625 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4550,"bootTime":1722292233,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:46:23.948671    6625 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:46:23.953291    6625 out.go:177] * [embed-certs-449000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:46:23.961291    6625 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:46:23.961346    6625 notify.go:220] Checking for updates...
	I0729 16:46:23.969165    6625 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:46:23.980963    6625 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:46:23.984318    6625 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:46:23.985458    6625 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:46:23.988314    6625 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:46:23.991634    6625 config.go:182] Loaded profile config "embed-certs-449000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:46:23.991897    6625 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:46:23.996233    6625 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:46:24.003298    6625 start.go:297] selected driver: qemu2
	I0729 16:46:24.003307    6625 start.go:901] validating driver "qemu2" against &{Name:embed-certs-449000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-449000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:46:24.003378    6625 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:46:24.005738    6625 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:46:24.005761    6625 cni.go:84] Creating CNI manager for ""
	I0729 16:46:24.005769    6625 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:46:24.005804    6625 start.go:340] cluster config:
	{Name:embed-certs-449000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-449000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:46:24.009303    6625 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:46:24.017267    6625 out.go:177] * Starting "embed-certs-449000" primary control-plane node in "embed-certs-449000" cluster
	I0729 16:46:24.021371    6625 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:46:24.021389    6625 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:46:24.021401    6625 cache.go:56] Caching tarball of preloaded images
	I0729 16:46:24.021488    6625 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:46:24.021494    6625 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:46:24.021554    6625 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/embed-certs-449000/config.json ...
	I0729 16:46:24.022089    6625 start.go:360] acquireMachinesLock for embed-certs-449000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:46:24.022133    6625 start.go:364] duration metric: took 37.458µs to acquireMachinesLock for "embed-certs-449000"
	I0729 16:46:24.022147    6625 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:46:24.022154    6625 fix.go:54] fixHost starting: 
	I0729 16:46:24.022270    6625 fix.go:112] recreateIfNeeded on embed-certs-449000: state=Stopped err=<nil>
	W0729 16:46:24.022279    6625 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:46:24.030304    6625 out.go:177] * Restarting existing qemu2 VM for "embed-certs-449000" ...
	I0729 16:46:24.034303    6625 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:46:24.034350    6625 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/embed-certs-449000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/embed-certs-449000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/embed-certs-449000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:d4:f6:50:ad:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/embed-certs-449000/disk.qcow2
	I0729 16:46:24.036546    6625 main.go:141] libmachine: STDOUT: 
	I0729 16:46:24.036569    6625 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:46:24.036600    6625 fix.go:56] duration metric: took 14.446459ms for fixHost
	I0729 16:46:24.036605    6625 start.go:83] releasing machines lock for "embed-certs-449000", held for 14.46675ms
	W0729 16:46:24.036612    6625 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:46:24.036655    6625 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:46:24.036660    6625 start.go:729] Will try again in 5 seconds ...
	I0729 16:46:29.038742    6625 start.go:360] acquireMachinesLock for embed-certs-449000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:46:30.019841    6625 start.go:364] duration metric: took 980.964708ms to acquireMachinesLock for "embed-certs-449000"
	I0729 16:46:30.019994    6625 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:46:30.020012    6625 fix.go:54] fixHost starting: 
	I0729 16:46:30.020727    6625 fix.go:112] recreateIfNeeded on embed-certs-449000: state=Stopped err=<nil>
	W0729 16:46:30.020758    6625 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:46:30.038765    6625 out.go:177] * Restarting existing qemu2 VM for "embed-certs-449000" ...
	I0729 16:46:30.045854    6625 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:46:30.046053    6625 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/embed-certs-449000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/embed-certs-449000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/embed-certs-449000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:d4:f6:50:ad:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/embed-certs-449000/disk.qcow2
	I0729 16:46:30.054917    6625 main.go:141] libmachine: STDOUT: 
	I0729 16:46:30.054987    6625 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:46:30.055066    6625 fix.go:56] duration metric: took 35.055208ms for fixHost
	I0729 16:46:30.055083    6625 start.go:83] releasing machines lock for "embed-certs-449000", held for 35.210084ms
	W0729 16:46:30.055252    6625 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-449000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-449000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:46:30.065849    6625 out.go:177] 
	W0729 16:46:30.069922    6625 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:46:30.070057    6625 out.go:239] * 
	* 
	W0729 16:46:30.072506    6625 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:46:30.081774    6625 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-449000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-449000 -n embed-certs-449000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-449000 -n embed-certs-449000: exit status 7 (56.047541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-449000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.25s)
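
SecondStart exercises the restart path (fixHost on the existing machine) rather than create, but it dies on the identical socket_vmnet connect. The flow reproduces without the harness; exit status 80 (GUEST_PROVISION) is expected for as long as the daemon stays down:

	# Re-run the exact start command from the log and capture the exit code.
	out/minikube-darwin-arm64 start -p embed-certs-449000 \
	  --memory=2200 --alsologtostderr --wait=true --embed-certs \
	  --driver=qemu2 --kubernetes-version=v1.30.3
	echo "exit status: $?"
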

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-770000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-770000 create -f testdata/busybox.yaml: exit status 1 (31.697208ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-770000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-770000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-770000 -n default-k8s-diff-port-770000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-770000 -n default-k8s-diff-port-770000: exit status 7 (31.121709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-770000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-770000 -n default-k8s-diff-port-770000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-770000 -n default-k8s-diff-port-770000: exit status 7 (32.308417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-770000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
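
The same missing-context failure as the embed-certs profile. For reference, the deploy step just creates a single busybox pod; a hypothetical stand-in for testdata/busybox.yaml (the real file's contents are not shown in this report):

	kubectl --context default-k8s-diff-port-770000 create -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox
	spec:
	  containers:
	  - name: busybox
	    image: busybox:1.28
	    command: ["sleep", "3600"]
	EOF
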

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-449000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-449000 -n embed-certs-449000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-449000 -n embed-certs-449000: exit status 7 (33.5545ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-449000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
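
This check fails before any API call: building the client config already errors on the missing context. Had the cluster existed, the wait reduces to polling the dashboard namespace; a sketch (the namespace comes from the AddonExistsAfterStop output below; the selector-free form is deliberate, since the test's label selector is not shown in this report):

	kubectl --context embed-certs-449000 -n kubernetes-dashboard \
	  wait --for=condition=Ready pod --all --timeout=120s
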

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-449000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-449000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-449000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.9515ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-449000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-449000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-449000 -n embed-certs-449000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-449000 -n embed-certs-449000: exit status 7 (31.208459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-449000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)
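
Same shape as the metrics-server check above, but against the dashboard's scraper deployment and the un-prefixed override image:

	kubectl --context embed-certs-449000 -n kubernetes-dashboard \
	  describe deploy/dashboard-metrics-scraper | grep 'registry.k8s.io/echoserver:1.4'
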

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-770000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-770000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-770000 describe deploy/metrics-server -n kube-system: exit status 1 (28.958667ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-770000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-770000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-770000 -n default-k8s-diff-port-770000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-770000 -n default-k8s-diff-port-770000: exit status 7 (34.233625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-770000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-449000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-449000 -n embed-certs-449000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-449000 -n embed-certs-449000: exit status 7 (29.711667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-449000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
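
VerifyKubernetesImages diffs the profile's loaded images against a per-version want list; with no VM, the got side is empty and every entry reports as missing. A manual sketch of the same comparison (assumes the default `image list` output prints one image reference per line):

	# Expected images for v1.30.3, copied from the -want side of the diff above.
	cat > /tmp/want.txt <<'EOF'
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/pause:3.9
	EOF
	diff /tmp/want.txt <(out/minikube-darwin-arm64 -p embed-certs-449000 image list | sort)
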

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-449000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-449000 --alsologtostderr -v=1: exit status 83 (46.714458ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-449000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-449000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:46:30.352286    6658 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:46:30.352452    6658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:46:30.352455    6658 out.go:304] Setting ErrFile to fd 2...
	I0729 16:46:30.352457    6658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:46:30.352583    6658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:46:30.352800    6658 out.go:298] Setting JSON to false
	I0729 16:46:30.352808    6658 mustload.go:65] Loading cluster: embed-certs-449000
	I0729 16:46:30.353007    6658 config.go:182] Loaded profile config "embed-certs-449000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:46:30.357066    6658 out.go:177] * The control-plane node embed-certs-449000 host is not running: state=Stopped
	I0729 16:46:30.363934    6658 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-449000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-449000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-449000 -n embed-certs-449000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-449000 -n embed-certs-449000: exit status 7 (32.396791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-449000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-449000 -n embed-certs-449000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-449000 -n embed-certs-449000: exit status 7 (27.9845ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-449000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)
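
pause exits 83 rather than 80: the profile config loads, but mustload finds the host stopped and prints the advisory instead of pausing. The precondition is simply a running host, so a guarded sequence avoids the error state:

	# Pause only makes sense against a running control plane.
	if [ "$(out/minikube-darwin-arm64 status -p embed-certs-449000 --format='{{.Host}}')" = "Running" ]; then
	  out/minikube-darwin-arm64 pause -p embed-certs-449000
	else
	  out/minikube-darwin-arm64 start -p embed-certs-449000
	fi
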

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (9.93s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-028000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-028000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.863863667s)

                                                
                                                
-- stdout --
	* [newest-cni-028000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-028000" primary control-plane node in "newest-cni-028000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-028000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 16:46:30.668181    6681 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:46:30.668311    6681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:46:30.668314    6681 out.go:304] Setting ErrFile to fd 2...
	I0729 16:46:30.668316    6681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:46:30.668446    6681 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:46:30.669480    6681 out.go:298] Setting JSON to false
	I0729 16:46:30.685368    6681 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4557,"bootTime":1722292233,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:46:30.685440    6681 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:46:30.689029    6681 out.go:177] * [newest-cni-028000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:46:30.696041    6681 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:46:30.696096    6681 notify.go:220] Checking for updates...
	I0729 16:46:30.705949    6681 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:46:30.709058    6681 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:46:30.711899    6681 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:46:30.715001    6681 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:46:30.717991    6681 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:46:30.721322    6681 config.go:182] Loaded profile config "default-k8s-diff-port-770000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:46:30.721383    6681 config.go:182] Loaded profile config "multinode-971000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:46:30.721438    6681 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:46:30.726012    6681 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 16:46:30.731962    6681 start.go:297] selected driver: qemu2
	I0729 16:46:30.731970    6681 start.go:901] validating driver "qemu2" against <nil>
	I0729 16:46:30.731976    6681 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:46:30.734303    6681 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0729 16:46:30.734326    6681 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0729 16:46:30.741964    6681 out.go:177] * Automatically selected the socket_vmnet network
	I0729 16:46:30.744968    6681 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 16:46:30.745002    6681 cni.go:84] Creating CNI manager for ""
	I0729 16:46:30.745010    6681 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:46:30.745014    6681 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 16:46:30.745041    6681 start.go:340] cluster config:
	{Name:newest-cni-028000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-028000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:46:30.748883    6681 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:46:30.756991    6681 out.go:177] * Starting "newest-cni-028000" primary control-plane node in "newest-cni-028000" cluster
	I0729 16:46:30.760929    6681 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 16:46:30.760949    6681 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 16:46:30.760961    6681 cache.go:56] Caching tarball of preloaded images
	I0729 16:46:30.761024    6681 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:46:30.761030    6681 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 16:46:30.761106    6681 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/newest-cni-028000/config.json ...
	I0729 16:46:30.761123    6681 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/newest-cni-028000/config.json: {Name:mkc8fee6a7b254f6ab2f7bdeb5fdf8ed6f6a96dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 16:46:30.761350    6681 start.go:360] acquireMachinesLock for newest-cni-028000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:46:30.761386    6681 start.go:364] duration metric: took 29.208µs to acquireMachinesLock for "newest-cni-028000"
	I0729 16:46:30.761399    6681 start.go:93] Provisioning new machine with config: &{Name:newest-cni-028000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-028000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:46:30.761429    6681 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:46:30.769939    6681 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:46:30.788402    6681 start.go:159] libmachine.API.Create for "newest-cni-028000" (driver="qemu2")
	I0729 16:46:30.788428    6681 client.go:168] LocalClient.Create starting
	I0729 16:46:30.788490    6681 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:46:30.788521    6681 main.go:141] libmachine: Decoding PEM data...
	I0729 16:46:30.788532    6681 main.go:141] libmachine: Parsing certificate...
	I0729 16:46:30.788569    6681 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:46:30.788599    6681 main.go:141] libmachine: Decoding PEM data...
	I0729 16:46:30.788606    6681 main.go:141] libmachine: Parsing certificate...
	I0729 16:46:30.789046    6681 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:46:30.936525    6681 main.go:141] libmachine: Creating SSH key...
	I0729 16:46:30.990296    6681 main.go:141] libmachine: Creating Disk image...
	I0729 16:46:30.990302    6681 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:46:30.990469    6681 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/newest-cni-028000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/newest-cni-028000/disk.qcow2
	I0729 16:46:30.999596    6681 main.go:141] libmachine: STDOUT: 
	I0729 16:46:30.999611    6681 main.go:141] libmachine: STDERR: 
	I0729 16:46:30.999669    6681 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/newest-cni-028000/disk.qcow2 +20000M
	I0729 16:46:31.007542    6681 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:46:31.007554    6681 main.go:141] libmachine: STDERR: 
	I0729 16:46:31.007567    6681 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/newest-cni-028000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/newest-cni-028000/disk.qcow2
	I0729 16:46:31.007576    6681 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:46:31.007591    6681 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:46:31.007616    6681 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/newest-cni-028000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/newest-cni-028000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/newest-cni-028000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:42:87:f6:22:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/newest-cni-028000/disk.qcow2
	I0729 16:46:31.009261    6681 main.go:141] libmachine: STDOUT: 
	I0729 16:46:31.009275    6681 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:46:31.009296    6681 client.go:171] duration metric: took 220.869083ms to LocalClient.Create
	I0729 16:46:33.011426    6681 start.go:128] duration metric: took 2.250042458s to createHost
	I0729 16:46:33.011524    6681 start.go:83] releasing machines lock for "newest-cni-028000", held for 2.250175042s
	W0729 16:46:33.011589    6681 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:46:33.018819    6681 out.go:177] * Deleting "newest-cni-028000" in qemu2 ...
	W0729 16:46:33.049240    6681 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:46:33.049264    6681 start.go:729] Will try again in 5 seconds ...
	I0729 16:46:38.051404    6681 start.go:360] acquireMachinesLock for newest-cni-028000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:46:38.051890    6681 start.go:364] duration metric: took 371.375µs to acquireMachinesLock for "newest-cni-028000"
	I0729 16:46:38.052024    6681 start.go:93] Provisioning new machine with config: &{Name:newest-cni-028000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-028000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 16:46:38.052321    6681 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 16:46:38.061986    6681 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 16:46:38.113905    6681 start.go:159] libmachine.API.Create for "newest-cni-028000" (driver="qemu2")
	I0729 16:46:38.113967    6681 client.go:168] LocalClient.Create starting
	I0729 16:46:38.114094    6681 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/ca.pem
	I0729 16:46:38.114159    6681 main.go:141] libmachine: Decoding PEM data...
	I0729 16:46:38.114176    6681 main.go:141] libmachine: Parsing certificate...
	I0729 16:46:38.114233    6681 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19348-1218/.minikube/certs/cert.pem
	I0729 16:46:38.114278    6681 main.go:141] libmachine: Decoding PEM data...
	I0729 16:46:38.114295    6681 main.go:141] libmachine: Parsing certificate...
	I0729 16:46:38.114833    6681 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 16:46:38.277447    6681 main.go:141] libmachine: Creating SSH key...
	I0729 16:46:38.424711    6681 main.go:141] libmachine: Creating Disk image...
	I0729 16:46:38.424717    6681 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 16:46:38.424911    6681 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/newest-cni-028000/disk.qcow2.raw /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/newest-cni-028000/disk.qcow2
	I0729 16:46:38.434431    6681 main.go:141] libmachine: STDOUT: 
	I0729 16:46:38.434488    6681 main.go:141] libmachine: STDERR: 
	I0729 16:46:38.434539    6681 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/newest-cni-028000/disk.qcow2 +20000M
	I0729 16:46:38.442352    6681 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 16:46:38.442398    6681 main.go:141] libmachine: STDERR: 
	I0729 16:46:38.442414    6681 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/newest-cni-028000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/newest-cni-028000/disk.qcow2
	I0729 16:46:38.442421    6681 main.go:141] libmachine: Starting QEMU VM...
	I0729 16:46:38.442433    6681 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:46:38.442462    6681 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/newest-cni-028000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/newest-cni-028000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/newest-cni-028000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:27:ce:01:36:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/newest-cni-028000/disk.qcow2
	I0729 16:46:38.444100    6681 main.go:141] libmachine: STDOUT: 
	I0729 16:46:38.444117    6681 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:46:38.444130    6681 client.go:171] duration metric: took 330.166208ms to LocalClient.Create
	I0729 16:46:40.446250    6681 start.go:128] duration metric: took 2.393961458s to createHost
	I0729 16:46:40.446389    6681 start.go:83] releasing machines lock for "newest-cni-028000", held for 2.394511375s
	W0729 16:46:40.446779    6681 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-028000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-028000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:46:40.460351    6681 out.go:177] 
	W0729 16:46:40.464432    6681 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:46:40.464467    6681 out.go:239] * 
	* 
	W0729 16:46:40.467129    6681 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:46:40.480316    6681 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-028000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-028000 -n newest-cni-028000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-028000 -n newest-cni-028000: exit status 7 (64.507125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-028000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.93s)
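
Note: both create attempts above die at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so qemu-system-aarch64 is never launched and minikube exits with status 80 before Kubernetes is involved. The reachability check can be reproduced outside minikube; the following is a minimal Go sketch (only the socket path is taken from the log, everything else is illustrative):

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Dial the unix socket that socket_vmnet_client hands to QEMU.
        // On the failing host this prints "connect: connection refused",
        // matching the STDERR lines in the log above.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If this probe fails, the socket_vmnet daemon on the build host is down (or listening somewhere other than the SocketVMnetPath recorded in the cluster config above), and no per-test retry can succeed until it is restored.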

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.55s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-770000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-770000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (6.496083833s)

-- stdout --
	* [default-k8s-diff-port-770000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-770000" primary control-plane node in "default-k8s-diff-port-770000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-770000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-770000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:46:34.052285    6709 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:46:34.052654    6709 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:46:34.052660    6709 out.go:304] Setting ErrFile to fd 2...
	I0729 16:46:34.052663    6709 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:46:34.052865    6709 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:46:34.054216    6709 out.go:298] Setting JSON to false
	I0729 16:46:34.070555    6709 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4561,"bootTime":1722292233,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:46:34.070626    6709 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:46:34.075595    6709 out.go:177] * [default-k8s-diff-port-770000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:46:34.082636    6709 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:46:34.082688    6709 notify.go:220] Checking for updates...
	I0729 16:46:34.089546    6709 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:46:34.091083    6709 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:46:34.094677    6709 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:46:34.097612    6709 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:46:34.100641    6709 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:46:34.103970    6709 config.go:182] Loaded profile config "default-k8s-diff-port-770000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:46:34.104257    6709 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:46:34.107598    6709 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:46:34.114577    6709 start.go:297] selected driver: qemu2
	I0729 16:46:34.114582    6709 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:46:34.114637    6709 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:46:34.116962    6709 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 16:46:34.117011    6709 cni.go:84] Creating CNI manager for ""
	I0729 16:46:34.117018    6709 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:46:34.117053    6709 start.go:340] cluster config:
	{Name:default-k8s-diff-port-770000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:46:34.120708    6709 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:46:34.129551    6709 out.go:177] * Starting "default-k8s-diff-port-770000" primary control-plane node in "default-k8s-diff-port-770000" cluster
	I0729 16:46:34.133596    6709 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 16:46:34.133615    6709 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 16:46:34.133625    6709 cache.go:56] Caching tarball of preloaded images
	I0729 16:46:34.133691    6709 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:46:34.133696    6709 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 16:46:34.133753    6709 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/default-k8s-diff-port-770000/config.json ...
	I0729 16:46:34.134250    6709 start.go:360] acquireMachinesLock for default-k8s-diff-port-770000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:46:34.134283    6709 start.go:364] duration metric: took 26.917µs to acquireMachinesLock for "default-k8s-diff-port-770000"
	I0729 16:46:34.134293    6709 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:46:34.134298    6709 fix.go:54] fixHost starting: 
	I0729 16:46:34.134416    6709 fix.go:112] recreateIfNeeded on default-k8s-diff-port-770000: state=Stopped err=<nil>
	W0729 16:46:34.134424    6709 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:46:34.138559    6709 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-770000" ...
	I0729 16:46:34.146552    6709 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:46:34.146585    6709 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/default-k8s-diff-port-770000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/default-k8s-diff-port-770000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/default-k8s-diff-port-770000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:fa:89:83:6e:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/default-k8s-diff-port-770000/disk.qcow2
	I0729 16:46:34.148614    6709 main.go:141] libmachine: STDOUT: 
	I0729 16:46:34.148631    6709 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:46:34.148665    6709 fix.go:56] duration metric: took 14.367291ms for fixHost
	I0729 16:46:34.148669    6709 start.go:83] releasing machines lock for "default-k8s-diff-port-770000", held for 14.382625ms
	W0729 16:46:34.148675    6709 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:46:34.148708    6709 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:46:34.148713    6709 start.go:729] Will try again in 5 seconds ...
	I0729 16:46:39.150765    6709 start.go:360] acquireMachinesLock for default-k8s-diff-port-770000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:46:40.446576    6709 start.go:364] duration metric: took 1.29568575s to acquireMachinesLock for "default-k8s-diff-port-770000"
	I0729 16:46:40.446804    6709 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:46:40.446820    6709 fix.go:54] fixHost starting: 
	I0729 16:46:40.447647    6709 fix.go:112] recreateIfNeeded on default-k8s-diff-port-770000: state=Stopped err=<nil>
	W0729 16:46:40.447679    6709 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:46:40.460352    6709 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-770000" ...
	I0729 16:46:40.464394    6709 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:46:40.464642    6709 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/default-k8s-diff-port-770000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/default-k8s-diff-port-770000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/default-k8s-diff-port-770000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:fa:89:83:6e:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/default-k8s-diff-port-770000/disk.qcow2
	I0729 16:46:40.474058    6709 main.go:141] libmachine: STDOUT: 
	I0729 16:46:40.474116    6709 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:46:40.474200    6709 fix.go:56] duration metric: took 27.382125ms for fixHost
	I0729 16:46:40.474216    6709 start.go:83] releasing machines lock for "default-k8s-diff-port-770000", held for 27.576458ms
	W0729 16:46:40.474421    6709 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-770000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-770000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:46:40.488292    6709 out.go:177] 
	W0729 16:46:40.495350    6709 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:46:40.495381    6709 out.go:239] * 
	* 
	W0729 16:46:40.498185    6709 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:46:40.509481    6709 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-770000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-770000 -n default-k8s-diff-port-770000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-770000 -n default-k8s-diff-port-770000: exit status 7 (56.027791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-770000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.55s)
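
Note: the start path retries exactly once: fixHost fails in ~14ms, minikube logs "Will try again in 5 seconds ...", and the second attempt fails identically, which is why this test spends about 6.5s total. A sketch of that control flow under the timing seen in the log (startHost is a hypothetical stand-in for the libmachine call):

    package main

    import (
        "fmt"
        "time"
    )

    // startWithRetry mirrors the single-retry pattern in the log: one
    // attempt, a fixed 5-second sleep, then one final attempt whose
    // error is returned to the caller (and becomes exit status 80).
    func startWithRetry(startHost func() error) error {
        if err := startHost(); err == nil {
            return nil
        }
        time.Sleep(5 * time.Second)
        return startHost()
    }

    func main() {
        err := startWithRetry(func() error {
            return fmt.Errorf(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
        })
        fmt.Println(err)
    }

With a fixed 5-second window, any socket_vmnet outage longer than that fails the whole test, which is consistent with the qemu2 tests in this report failing within roughly five to ten seconds each.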

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-770000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-770000 -n default-k8s-diff-port-770000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-770000 -n default-k8s-diff-port-770000: exit status 7 (34.651ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-770000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-770000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-770000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-770000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.32425ms)

** stderr ** 
	error: context "default-k8s-diff-port-770000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-770000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-770000 -n default-k8s-diff-port-770000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-770000 -n default-k8s-diff-port-770000: exit status 7 (33.651ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-770000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)
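
Note: this failure (like UserAppExistsAfterStop above it) is downstream of SecondStart: the VM was never restarted, the kubeconfig has no context named default-k8s-diff-port-770000, and kubectl exits 1 before any dashboard state can be checked. A minimal client-go sketch of the same precondition the test trips over (the profile name is copied from the log; assumes the k8s.io/client-go dependency):

    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the kubeconfig the same way kubectl does and look up the
        // profile's context; on this run the lookup comes back empty.
        rules := clientcmd.NewDefaultClientConfigLoadingRules()
        cfg, err := rules.Load()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if _, ok := cfg.Contexts["default-k8s-diff-port-770000"]; !ok {
            fmt.Println(`context "default-k8s-diff-port-770000" does not exist`)
        }
    }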

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-770000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-770000 -n default-k8s-diff-port-770000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-770000 -n default-k8s-diff-port-770000: exit status 7 (28.044959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-770000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.06s)
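
Note: VerifyKubernetesImages compares an expected image list for v1.30.3 against the output of "minikube image list --format=json"; with the VM stopped the actual list is empty, so the -want +got diff above reports every expected image as missing. The comparison reduces to a set difference, sketched here in Go (the want entries are copied from the diff; got is empty on this run):

    package main

    import "fmt"

    // missingImages returns every image in want that is absent from got.
    // With got empty, as when the VM never booted, it returns all of want.
    func missingImages(want, got []string) []string {
        have := make(map[string]bool, len(got))
        for _, img := range got {
            have[img] = true
        }
        var missing []string
        for _, img := range want {
            if !have[img] {
                missing = append(missing, img)
            }
        }
        return missing
    }

    func main() {
        want := []string{
            "gcr.io/k8s-minikube/storage-provisioner:v5",
            "registry.k8s.io/coredns/coredns:v1.11.1",
            "registry.k8s.io/etcd:3.5.12-0",
            "registry.k8s.io/kube-apiserver:v1.30.3",
            "registry.k8s.io/kube-controller-manager:v1.30.3",
            "registry.k8s.io/kube-proxy:v1.30.3",
            "registry.k8s.io/kube-scheduler:v1.30.3",
            "registry.k8s.io/pause:3.9",
        }
        fmt.Println(missingImages(want, nil))
    }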

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-770000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-770000 --alsologtostderr -v=1: exit status 83 (39.294167ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-770000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-770000"

-- /stdout --
** stderr ** 
	I0729 16:46:40.765291    6740 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:46:40.765434    6740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:46:40.765438    6740 out.go:304] Setting ErrFile to fd 2...
	I0729 16:46:40.765440    6740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:46:40.765550    6740 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:46:40.765768    6740 out.go:298] Setting JSON to false
	I0729 16:46:40.765775    6740 mustload.go:65] Loading cluster: default-k8s-diff-port-770000
	I0729 16:46:40.765970    6740 config.go:182] Loaded profile config "default-k8s-diff-port-770000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 16:46:40.769419    6740 out.go:177] * The control-plane node default-k8s-diff-port-770000 host is not running: state=Stopped
	I0729 16:46:40.773330    6740 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-770000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-770000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-770000 -n default-k8s-diff-port-770000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-770000 -n default-k8s-diff-port-770000: exit status 7 (28.613583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-770000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-770000 -n default-k8s-diff-port-770000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-770000 -n default-k8s-diff-port-770000: exit status 7 (29.005458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-770000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
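
Note: three distinct exit codes appear in this group and all trace back to the stopped host: start returns 80 ("Exiting due to GUEST_PROVISION"), pause returns 83 with the "host is not running" hint, and "minikube status" returns 7, which the post-mortem helpers explicitly treat as "may be ok". A sketch of reading the status exit code the way the post-mortem does (binary path and profile name copied from the log):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Run the same status probe as the post-mortem above; a non-zero
        // exit with state "Stopped" yields exit code 7, not a test bug.
        cmd := exec.Command("out/minikube-darwin-arm64", "status",
            "--format={{.Host}}", "-p", "default-k8s-diff-port-770000")
        out, err := cmd.Output()
        state := strings.TrimSpace(string(out))
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            fmt.Printf("state=%s exit=%d (may be ok)\n", state, exitErr.ExitCode())
            return
        }
        fmt.Printf("state=%s exit=0\n", state)
    }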

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-028000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-028000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.180937s)

-- stdout --
	* [newest-cni-028000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-028000" primary control-plane node in "newest-cni-028000" cluster
	* Restarting existing qemu2 VM for "newest-cni-028000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-028000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 16:46:44.504664    6780 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:46:44.504775    6780 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:46:44.504779    6780 out.go:304] Setting ErrFile to fd 2...
	I0729 16:46:44.504781    6780 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:46:44.504905    6780 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:46:44.505921    6780 out.go:298] Setting JSON to false
	I0729 16:46:44.521889    6780 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4571,"bootTime":1722292233,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 16:46:44.521952    6780 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 16:46:44.527135    6780 out.go:177] * [newest-cni-028000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 16:46:44.533085    6780 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 16:46:44.533115    6780 notify.go:220] Checking for updates...
	I0729 16:46:44.540138    6780 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 16:46:44.543156    6780 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 16:46:44.546149    6780 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 16:46:44.549187    6780 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 16:46:44.552164    6780 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 16:46:44.555460    6780 config.go:182] Loaded profile config "newest-cni-028000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 16:46:44.555705    6780 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 16:46:44.559131    6780 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 16:46:44.566144    6780 start.go:297] selected driver: qemu2
	I0729 16:46:44.566152    6780 start.go:901] validating driver "qemu2" against &{Name:newest-cni-028000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-028000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:46:44.566214    6780 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 16:46:44.568449    6780 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 16:46:44.568471    6780 cni.go:84] Creating CNI manager for ""
	I0729 16:46:44.568479    6780 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 16:46:44.568499    6780 start.go:340] cluster config:
	{Name:newest-cni-028000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-028000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 16:46:44.571942    6780 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 16:46:44.579056    6780 out.go:177] * Starting "newest-cni-028000" primary control-plane node in "newest-cni-028000" cluster
	I0729 16:46:44.583186    6780 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 16:46:44.583205    6780 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 16:46:44.583219    6780 cache.go:56] Caching tarball of preloaded images
	I0729 16:46:44.583288    6780 preload.go:172] Found /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 16:46:44.583294    6780 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 16:46:44.583357    6780 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/newest-cni-028000/config.json ...
	I0729 16:46:44.583838    6780 start.go:360] acquireMachinesLock for newest-cni-028000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:46:44.583879    6780 start.go:364] duration metric: took 29µs to acquireMachinesLock for "newest-cni-028000"
	I0729 16:46:44.583890    6780 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:46:44.583897    6780 fix.go:54] fixHost starting: 
	I0729 16:46:44.584018    6780 fix.go:112] recreateIfNeeded on newest-cni-028000: state=Stopped err=<nil>
	W0729 16:46:44.584027    6780 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:46:44.588095    6780 out.go:177] * Restarting existing qemu2 VM for "newest-cni-028000" ...
	I0729 16:46:44.595940    6780 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:46:44.595981    6780 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/newest-cni-028000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/newest-cni-028000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/newest-cni-028000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:27:ce:01:36:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/newest-cni-028000/disk.qcow2
	I0729 16:46:44.598024    6780 main.go:141] libmachine: STDOUT: 
	I0729 16:46:44.598045    6780 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:46:44.598076    6780 fix.go:56] duration metric: took 14.179625ms for fixHost
	I0729 16:46:44.598082    6780 start.go:83] releasing machines lock for "newest-cni-028000", held for 14.198291ms
	W0729 16:46:44.598088    6780 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:46:44.598134    6780 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:46:44.598139    6780 start.go:729] Will try again in 5 seconds ...
	I0729 16:46:49.600172    6780 start.go:360] acquireMachinesLock for newest-cni-028000: {Name:mk62e4cb8b2a9b39cc10cfbbbe6f504a0d08882a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 16:46:49.600532    6780 start.go:364] duration metric: took 292.542µs to acquireMachinesLock for "newest-cni-028000"
	I0729 16:46:49.600656    6780 start.go:96] Skipping create...Using existing machine configuration
	I0729 16:46:49.600672    6780 fix.go:54] fixHost starting: 
	I0729 16:46:49.601360    6780 fix.go:112] recreateIfNeeded on newest-cni-028000: state=Stopped err=<nil>
	W0729 16:46:49.601386    6780 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 16:46:49.610664    6780 out.go:177] * Restarting existing qemu2 VM for "newest-cni-028000" ...
	I0729 16:46:49.614784    6780 qemu.go:418] Using hvf for hardware acceleration
	I0729 16:46:49.615000    6780 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/newest-cni-028000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/newest-cni-028000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/newest-cni-028000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:27:ce:01:36:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19348-1218/.minikube/machines/newest-cni-028000/disk.qcow2
	I0729 16:46:49.623866    6780 main.go:141] libmachine: STDOUT: 
	I0729 16:46:49.623931    6780 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 16:46:49.623993    6780 fix.go:56] duration metric: took 23.323292ms for fixHost
	I0729 16:46:49.624009    6780 start.go:83] releasing machines lock for "newest-cni-028000", held for 23.453833ms
	W0729 16:46:49.624163    6780 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-028000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-028000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 16:46:49.631732    6780 out.go:177] 
	W0729 16:46:49.635919    6780 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 16:46:49.635942    6780 out.go:239] * 
	* 
	W0729 16:46:49.638477    6780 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 16:46:49.645558    6780 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-028000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-028000 -n newest-cni-028000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-028000 -n newest-cni-028000: exit status 7 (67.8715ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-028000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
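Both start attempts in the failure above die at the same step: the qemu2 driver cannot dial the socket_vmnet unix socket (Failed to connect to "/var/run/socket_vmnet": Connection refused), so the VM never restarts and the harness gives up after its single 5-second retry. A minimal triage sketch for the build agent, using only paths that appear in the log (illustrative commands, not part of the test run):

	# Is a socket_vmnet daemon running on the agent at all?
	pgrep -fl socket_vmnet
	# Does the socket the driver dials exist, and is it a unix socket ("s" mode bit)?
	ls -l /var/run/socket_vmnet
	# The client binary the log invokes; confirm it is present and executable.
	ls -l /opt/socket_vmnet/bin/socket_vmnet_client

If the daemon is down, both attempts fail identically, which is exactly the pattern logged above.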

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-028000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-028000 -n newest-cni-028000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-028000 -n newest-cni-028000: exit status 7 (29.606084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-028000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
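The (-want +got) block above is a go-cmp style diff: each line prefixed with "-" is an image the test expected "image list" to report for v1.31.0-beta.0, and the "got" side is empty because the profile's VM never started. A sketch for re-running the probe by hand; the first command is taken verbatim from the test, and the python3 pretty-printer is an illustrative addition:

	# Re-run the image probe the test performs and pretty-print the JSON output.
	out/minikube-darwin-arm64 -p newest-cni-028000 image list --format=json | python3 -m json.tool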

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-028000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-028000 --alsologtostderr -v=1: exit status 83 (39.951333ms)

-- stdout --
	* The control-plane node newest-cni-028000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-028000"

-- /stdout --
** stderr ** 
	I0729 16:46:49.825518    6794 out.go:291] Setting OutFile to fd 1 ...
	I0729 16:46:49.825677    6794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:46:49.825680    6794 out.go:304] Setting ErrFile to fd 2...
	I0729 16:46:49.825682    6794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 16:46:49.825810    6794 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 16:46:49.826041    6794 out.go:298] Setting JSON to false
	I0729 16:46:49.826047    6794 mustload.go:65] Loading cluster: newest-cni-028000
	I0729 16:46:49.826252    6794 config.go:182] Loaded profile config "newest-cni-028000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 16:46:49.829335    6794 out.go:177] * The control-plane node newest-cni-028000 host is not running: state=Stopped
	I0729 16:46:49.833057    6794 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-028000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-028000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-028000 -n newest-cni-028000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-028000 -n newest-cni-028000: exit status 7 (28.453333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-028000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-028000 -n newest-cni-028000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-028000 -n newest-cni-028000: exit status 7 (28.767041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-028000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
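VerifyKubernetesImages and Pause are cascade failures from the SecondStart error: with the host Stopped, pause exits 83 and status exits 7, as captured above. A sketch of the guard implied by the "(may be ok)" note in the post-mortems, reusing the exact status invocation from this report:

	# In this report, exit status 7 from "status" accompanies state=Stopped;
	# dependent image/pause checks are pointless in that state.
	if ! out/minikube-darwin-arm64 status '--format={{.Host}}' -p newest-cni-028000 -n newest-cni-028000; then
	  echo "host not running; skipping dependent checks"
	fi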


Test pass (161/278)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 12.1
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.11
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-beta.0/json-events 13.52
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.1
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.35
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 136.08
38 TestAddons/serial/Volcano 37.02
40 TestAddons/serial/GCPAuth/Namespaces 0.08
42 TestAddons/parallel/Registry 13.73
43 TestAddons/parallel/Ingress 18.13
44 TestAddons/parallel/InspektorGadget 10.22
45 TestAddons/parallel/MetricsServer 5.25
48 TestAddons/parallel/CSI 51.35
49 TestAddons/parallel/Headlamp 17.57
50 TestAddons/parallel/CloudSpanner 5.09
51 TestAddons/parallel/LocalPath 40.79
52 TestAddons/parallel/NvidiaDevicePlugin 5.15
53 TestAddons/parallel/Yakd 10.24
54 TestAddons/StoppedEnableDisable 12.39
62 TestHyperKitDriverInstallOrUpdate 10.72
65 TestErrorSpam/setup 35.52
66 TestErrorSpam/start 0.34
67 TestErrorSpam/status 0.25
68 TestErrorSpam/pause 0.64
69 TestErrorSpam/unpause 0.61
70 TestErrorSpam/stop 64.3
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 51.04
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 35.77
77 TestFunctional/serial/KubeContext 0.03
78 TestFunctional/serial/KubectlGetPods 0.04
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.56
82 TestFunctional/serial/CacheCmd/cache/add_local 1.11
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
84 TestFunctional/serial/CacheCmd/cache/list 0.03
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
86 TestFunctional/serial/CacheCmd/cache/cache_reload 0.62
87 TestFunctional/serial/CacheCmd/cache/delete 0.07
88 TestFunctional/serial/MinikubeKubectlCmd 0.66
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.92
90 TestFunctional/serial/ExtraConfig 37.74
91 TestFunctional/serial/ComponentHealth 0.04
92 TestFunctional/serial/LogsCmd 0.65
93 TestFunctional/serial/LogsFileCmd 0.62
94 TestFunctional/serial/InvalidService 3.84
96 TestFunctional/parallel/ConfigCmd 0.22
97 TestFunctional/parallel/DashboardCmd 8.33
98 TestFunctional/parallel/DryRun 0.23
99 TestFunctional/parallel/InternationalLanguage 0.12
100 TestFunctional/parallel/StatusCmd 0.24
105 TestFunctional/parallel/AddonsCmd 0.09
106 TestFunctional/parallel/PersistentVolumeClaim 25.47
108 TestFunctional/parallel/SSHCmd 0.12
109 TestFunctional/parallel/CpCmd 0.41
111 TestFunctional/parallel/FileSync 0.06
112 TestFunctional/parallel/CertSync 0.38
116 TestFunctional/parallel/NodeLabels 0.04
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.13
120 TestFunctional/parallel/License 0.31
121 TestFunctional/parallel/Version/short 0.04
122 TestFunctional/parallel/Version/components 0.16
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
127 TestFunctional/parallel/ImageCommands/ImageBuild 1.71
128 TestFunctional/parallel/ImageCommands/Setup 1.81
129 TestFunctional/parallel/DockerEnv/bash 0.28
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
133 TestFunctional/parallel/ServiceCmd/DeployApp 11.09
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.47
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.46
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.16
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.14
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.15
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.22
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.18
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.23
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
146 TestFunctional/parallel/ServiceCmd/List 0.08
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.08
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.09
149 TestFunctional/parallel/ServiceCmd/Format 0.1
150 TestFunctional/parallel/ServiceCmd/URL 0.1
151 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
152 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
153 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
155 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
156 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
157 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
158 TestFunctional/parallel/ProfileCmd/profile_list 0.12
159 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
160 TestFunctional/parallel/MountCmd/any-port 5.97
161 TestFunctional/parallel/MountCmd/specific-port 0.97
162 TestFunctional/parallel/MountCmd/VerifyCleanup 0.61
163 TestFunctional/delete_echo-server_images 0.03
164 TestFunctional/delete_my-image_image 0.01
165 TestFunctional/delete_minikube_cached_images 0.01
169 TestMultiControlPlane/serial/StartCluster 201.17
170 TestMultiControlPlane/serial/DeployApp 4.76
171 TestMultiControlPlane/serial/PingHostFromPods 0.75
172 TestMultiControlPlane/serial/AddWorkerNode 55.77
173 TestMultiControlPlane/serial/NodeLabels 0.12
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.26
175 TestMultiControlPlane/serial/CopyFile 4.34
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 150.1
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 3.34
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.2
217 TestMainNoArgs 0.03
264 TestStoppedBinaryUpgrade/Setup 1.4
276 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
280 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
281 TestNoKubernetes/serial/ProfileList 31.27
282 TestNoKubernetes/serial/Stop 3.55
284 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
294 TestStoppedBinaryUpgrade/MinikubeLogs 0.76
299 TestStartStop/group/old-k8s-version/serial/Stop 1.82
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.1
310 TestStartStop/group/no-preload/serial/Stop 3.01
313 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
323 TestStartStop/group/embed-certs/serial/Stop 3.2
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
332 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.54
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
337 TestStartStop/group/newest-cni/serial/DeployApp 0
338 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
341 TestStartStop/group/newest-cni/serial/Stop 3.72
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-541000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-541000: exit status 85 (94.244541ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-541000 | jenkins | v1.33.1 | 29 Jul 24 15:46 PDT |          |
	|         | -p download-only-541000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 15:46:28
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 15:46:28.636182    1716 out.go:291] Setting OutFile to fd 1 ...
	I0729 15:46:28.636409    1716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 15:46:28.636412    1716 out.go:304] Setting ErrFile to fd 2...
	I0729 15:46:28.636415    1716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 15:46:28.636539    1716 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	W0729 15:46:28.636627    1716 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19348-1218/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19348-1218/.minikube/config/config.json: no such file or directory
	I0729 15:46:28.637918    1716 out.go:298] Setting JSON to true
	I0729 15:46:28.655055    1716 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":955,"bootTime":1722292233,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 15:46:28.655119    1716 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 15:46:28.660827    1716 out.go:97] [download-only-541000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 15:46:28.661017    1716 notify.go:220] Checking for updates...
	W0729 15:46:28.661085    1716 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 15:46:28.663779    1716 out.go:169] MINIKUBE_LOCATION=19348
	I0729 15:46:28.666905    1716 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 15:46:28.671786    1716 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 15:46:28.675818    1716 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 15:46:28.678854    1716 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	W0729 15:46:28.684882    1716 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 15:46:28.685129    1716 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 15:46:28.689812    1716 out.go:97] Using the qemu2 driver based on user configuration
	I0729 15:46:28.689832    1716 start.go:297] selected driver: qemu2
	I0729 15:46:28.689856    1716 start.go:901] validating driver "qemu2" against <nil>
	I0729 15:46:28.689927    1716 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 15:46:28.692814    1716 out.go:169] Automatically selected the socket_vmnet network
	I0729 15:46:28.698572    1716 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 15:46:28.698660    1716 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 15:46:28.698684    1716 cni.go:84] Creating CNI manager for ""
	I0729 15:46:28.698702    1716 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 15:46:28.698747    1716 start.go:340] cluster config:
	{Name:download-only-541000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-541000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 15:46:28.704075    1716 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 15:46:28.708864    1716 out.go:97] Downloading VM boot image ...
	I0729 15:46:28.708884    1716 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso
	I0729 15:46:42.637788    1716 out.go:97] Starting "download-only-541000" primary control-plane node in "download-only-541000" cluster
	I0729 15:46:42.637814    1716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 15:46:42.694845    1716 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 15:46:42.694851    1716 cache.go:56] Caching tarball of preloaded images
	I0729 15:46:42.695016    1716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 15:46:42.703091    1716 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 15:46:42.703098    1716 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 15:46:42.784176    1716 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 15:46:55.069837    1716 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 15:46:55.070013    1716 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 15:46:55.765105    1716 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 15:46:55.765327    1716 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/download-only-541000/config.json ...
	I0729 15:46:55.765344    1716 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/download-only-541000/config.json: {Name:mk2ee03f076dca51dba3a4685e9347d82f2f98bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 15:46:55.765575    1716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 15:46:55.765775    1716 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0729 15:46:56.255250    1716 out.go:169] 
	W0729 15:46:56.260390    1716 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19348-1218/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1084e1a60 0x1084e1a60 0x1084e1a60 0x1084e1a60 0x1084e1a60 0x1084e1a60 0x1084e1a60] Decompressors:map[bz2:0x14000711cf0 gz:0x14000711cf8 tar:0x14000711ca0 tar.bz2:0x14000711cb0 tar.gz:0x14000711cc0 tar.xz:0x14000711cd0 tar.zst:0x14000711ce0 tbz2:0x14000711cb0 tgz:0x14000711cc0 txz:0x14000711cd0 tzst:0x14000711ce0 xz:0x14000711d00 zip:0x14000711d10 zst:0x14000711d08] Getters:map[file:0x140000c2d10 http:0x140000b4550 https:0x140000b45a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0729 15:46:56.260416    1716 out_reason.go:110] 
	W0729 15:46:56.268314    1716 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 15:46:56.271234    1716 out.go:169] 
	
	
	* The control-plane node download-only-541000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-541000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
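The "Failed to cache kubectl" error preserved in the log above is the interesting part of this otherwise passing test: the checksum fetch for the darwin/arm64 kubectl v1.20.0 binary comes back 404, plausibly because upstream never published darwin/arm64 artifacts for a release that old (an assumption; the log only proves the 404). The URL can be checked from any machine; this sketch copies it verbatim from the log:

	# HEAD request against the checksum URL; a 404 here reproduces the cache failure.
	curl -sIL "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"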

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-541000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.30.3/json-events (12.1s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-125000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-125000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (12.10373825s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (12.10s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-125000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-125000: exit status 85 (79.5155ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-541000 | jenkins | v1.33.1 | 29 Jul 24 15:46 PDT |                     |
	|         | -p download-only-541000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Jul 24 15:46 PDT | 29 Jul 24 15:46 PDT |
	| delete  | -p download-only-541000        | download-only-541000 | jenkins | v1.33.1 | 29 Jul 24 15:46 PDT | 29 Jul 24 15:46 PDT |
	| start   | -o=json --download-only        | download-only-125000 | jenkins | v1.33.1 | 29 Jul 24 15:46 PDT |                     |
	|         | -p download-only-125000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 15:46:56
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 15:46:56.681603    1743 out.go:291] Setting OutFile to fd 1 ...
	I0729 15:46:56.681753    1743 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 15:46:56.681756    1743 out.go:304] Setting ErrFile to fd 2...
	I0729 15:46:56.681758    1743 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 15:46:56.681894    1743 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 15:46:56.682907    1743 out.go:298] Setting JSON to true
	I0729 15:46:56.698813    1743 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":983,"bootTime":1722292233,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 15:46:56.698876    1743 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 15:46:56.702499    1743 out.go:97] [download-only-125000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 15:46:56.702624    1743 notify.go:220] Checking for updates...
	I0729 15:46:56.706337    1743 out.go:169] MINIKUBE_LOCATION=19348
	I0729 15:46:56.709344    1743 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 15:46:56.713401    1743 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 15:46:56.716349    1743 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 15:46:56.719360    1743 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	W0729 15:46:56.725215    1743 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 15:46:56.725350    1743 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 15:46:56.728298    1743 out.go:97] Using the qemu2 driver based on user configuration
	I0729 15:46:56.728306    1743 start.go:297] selected driver: qemu2
	I0729 15:46:56.728308    1743 start.go:901] validating driver "qemu2" against <nil>
	I0729 15:46:56.728349    1743 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 15:46:56.731356    1743 out.go:169] Automatically selected the socket_vmnet network
	I0729 15:46:56.734810    1743 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 15:46:56.734903    1743 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 15:46:56.734962    1743 cni.go:84] Creating CNI manager for ""
	I0729 15:46:56.734970    1743 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 15:46:56.734979    1743 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 15:46:56.735021    1743 start.go:340] cluster config:
	{Name:download-only-125000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-125000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 15:46:56.738472    1743 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 15:46:56.741351    1743 out.go:97] Starting "download-only-125000" primary control-plane node in "download-only-125000" cluster
	I0729 15:46:56.741358    1743 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 15:46:56.792548    1743 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 15:46:56.792566    1743 cache.go:56] Caching tarball of preloaded images
	I0729 15:46:56.792729    1743 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 15:46:56.799991    1743 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0729 15:46:56.799999    1743 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0729 15:46:56.878051    1743 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 15:47:03.512928    1743 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0729 15:47:03.513086    1743 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0729 15:47:04.055707    1743 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 15:47:04.055897    1743 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/download-only-125000/config.json ...
	I0729 15:47:04.055913    1743 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/download-only-125000/config.json: {Name:mk350d5aafcb8883f39cbfd390b22bbbc3bfc631 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 15:47:04.056139    1743 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 15:47:04.056259    1743 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/darwin/arm64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-125000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-125000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-125000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0-beta.0/json-events (13.52s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-217000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-217000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 : (13.515847417s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (13.52s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-217000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-217000: exit status 85 (82.26575ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-541000 | jenkins | v1.33.1 | 29 Jul 24 15:46 PDT |                     |
	|         | -p download-only-541000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 15:46 PDT | 29 Jul 24 15:46 PDT |
	| delete  | -p download-only-541000             | download-only-541000 | jenkins | v1.33.1 | 29 Jul 24 15:46 PDT | 29 Jul 24 15:46 PDT |
	| start   | -o=json --download-only             | download-only-125000 | jenkins | v1.33.1 | 29 Jul 24 15:46 PDT |                     |
	|         | -p download-only-125000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 15:47 PDT | 29 Jul 24 15:47 PDT |
	| delete  | -p download-only-125000             | download-only-125000 | jenkins | v1.33.1 | 29 Jul 24 15:47 PDT | 29 Jul 24 15:47 PDT |
	| start   | -o=json --download-only             | download-only-217000 | jenkins | v1.33.1 | 29 Jul 24 15:47 PDT |                     |
	|         | -p download-only-217000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 15:47:09
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 15:47:09.076393    1767 out.go:291] Setting OutFile to fd 1 ...
	I0729 15:47:09.076557    1767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 15:47:09.076560    1767 out.go:304] Setting ErrFile to fd 2...
	I0729 15:47:09.076562    1767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 15:47:09.076697    1767 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 15:47:09.077757    1767 out.go:298] Setting JSON to true
	I0729 15:47:09.093758    1767 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":996,"bootTime":1722292233,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 15:47:09.093815    1767 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 15:47:09.097339    1767 out.go:97] [download-only-217000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 15:47:09.097465    1767 notify.go:220] Checking for updates...
	I0729 15:47:09.101311    1767 out.go:169] MINIKUBE_LOCATION=19348
	I0729 15:47:09.105334    1767 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 15:47:09.109318    1767 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 15:47:09.112316    1767 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 15:47:09.115272    1767 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	W0729 15:47:09.121293    1767 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 15:47:09.121463    1767 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 15:47:09.124310    1767 out.go:97] Using the qemu2 driver based on user configuration
	I0729 15:47:09.124320    1767 start.go:297] selected driver: qemu2
	I0729 15:47:09.124325    1767 start.go:901] validating driver "qemu2" against <nil>
	I0729 15:47:09.124398    1767 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 15:47:09.127264    1767 out.go:169] Automatically selected the socket_vmnet network
	I0729 15:47:09.132548    1767 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 15:47:09.132678    1767 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 15:47:09.132699    1767 cni.go:84] Creating CNI manager for ""
	I0729 15:47:09.132707    1767 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 15:47:09.132718    1767 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 15:47:09.132758    1767 start.go:340] cluster config:
	{Name:download-only-217000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-217000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 15:47:09.136287    1767 iso.go:125] acquiring lock: {Name:mkf1d728b1fe142348c997391c276a84a6c54ad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 15:47:09.139267    1767 out.go:97] Starting "download-only-217000" primary control-plane node in "download-only-217000" cluster
	I0729 15:47:09.139274    1767 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 15:47:09.197218    1767 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 15:47:09.197233    1767 cache.go:56] Caching tarball of preloaded images
	I0729 15:47:09.197400    1767 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 15:47:09.201630    1767 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0729 15:47:09.201637    1767 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 15:47:09.291519    1767 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:5025ece13368183bde5a7f01207f4bc3 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 15:47:17.611107    1767 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 15:47:17.611296    1767 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 15:47:18.130230    1767 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 15:47:18.130414    1767 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/download-only-217000/config.json ...
	I0729 15:47:18.130430    1767 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/download-only-217000/config.json: {Name:mkd3779071e80796a6d6c0cb54f3d55dc550779e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 15:47:18.130657    1767 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 15:47:18.130778    1767 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19348-1218/.minikube/cache/darwin/arm64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-217000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-217000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)
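As a worked example of the log format declared in the Last Start header above ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg), its first entry decodes as follows (field labels come from that format string; the severity letters are the standard klog levels):

	I0729 15:47:09.076393    1767 out.go:291] Setting OutFile to fd 1 ...
	I                 severity: I=Info (W=Warning, E=Error, F=Fatal)
	0729              mmdd: July 29
	15:47:09.076393   hh:mm:ss.uuuuuu wall-clock time
	1767              threadid (constant across this run, so effectively the process id)
	out.go:291        source file and line that emitted the entry
	Setting OutFile to fd 1 ...   msg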

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.10s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-217000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.35s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-123000 --alsologtostderr --binary-mirror http://127.0.0.1:49325 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-123000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-123000
--- PASS: TestBinaryMirror (0.35s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-353000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-353000: exit status 85 (56.214333ms)
-- stdout --
	* Profile "addons-353000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-353000"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-353000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-353000: exit status 85 (53.427792ms)
-- stdout --
	* Profile "addons-353000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-353000"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (136.08s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-353000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-353000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (2m16.076172084s)
--- PASS: TestAddons/Setup (136.08s)

TestAddons/serial/Volcano (37.02s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 6.353959ms
addons_test.go:905: volcano-admission stabilized in 6.54875ms
addons_test.go:897: volcano-scheduler stabilized in 6.569084ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-jc59m" [16bf747d-08f9-4fae-a7c3-9049a08155e8] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003747167s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-ttpmp" [69d317c5-121c-4e25-8d9d-5611f90ab1a6] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004068709s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-cpfkd" [99075f79-2c5a-4adb-a4d6-fafb8215b81a] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003922875s
addons_test.go:932: (dbg) Run:  kubectl --context addons-353000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-353000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-353000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [c8ee25bb-ab27-4558-a514-a8bcd40ad2ca] Pending
helpers_test.go:344: "test-job-nginx-0" [c8ee25bb-ab27-4558-a514-a8bcd40ad2ca] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [c8ee25bb-ab27-4558-a514-a8bcd40ad2ca] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.00334s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-353000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-353000 addons disable volcano --alsologtostderr -v=1: (9.778063875s)
--- PASS: TestAddons/serial/Volcano (37.02s)

TestAddons/serial/GCPAuth/Namespaces (0.08s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-353000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-353000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.08s)

TestAddons/parallel/Registry (13.73s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.106042ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-v2sk6" [ff9030c9-642c-4296-8f3e-8700650c99ee] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004496792s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-562v2" [201e5f1b-7789-488e-84cf-18dfc5b06a22] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002559958s
addons_test.go:342: (dbg) Run:  kubectl --context addons-353000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-353000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-353000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.45598225s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-353000 ip
2024/07/29 15:50:46 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-353000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.73s)
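The registry checks above can be replayed by hand; a minimal sketch, using only commands that appear verbatim in this test (curl stands in for the test's internal DEBUG GET, and the node IP 192.168.105.2 is the one printed by the ip command above):

	# probe the registry Service from inside the cluster
	kubectl --context addons-353000 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# then fetch the registry endpoint exposed on the node IP
	out/minikube-darwin-arm64 -p addons-353000 ip
	curl http://192.168.105.2:5000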

TestAddons/parallel/Ingress (18.13s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-353000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-353000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-353000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [173258af-78e9-4ec5-ba9a-46254ec7cb20] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [173258af-78e9-4ec5-ba9a-46254ec7cb20] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.002637042s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-353000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-353000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-353000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-353000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-353000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-353000 addons disable ingress --alsologtostderr -v=1: (7.205780708s)
--- PASS: TestAddons/parallel/Ingress (18.13s)

TestAddons/parallel/InspektorGadget (10.22s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-scg97" [a85bfc01-9bb6-4650-bf8b-3564e14eda59] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004539125s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-353000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-353000: (5.213008s)
--- PASS: TestAddons/parallel/InspektorGadget (10.22s)

TestAddons/parallel/MetricsServer (5.25s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.415542ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-tm9gc" [4457bc79-6dbe-4ab4-86d0-ec07416d0423] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004101583s
addons_test.go:417: (dbg) Run:  kubectl --context addons-353000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-353000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.25s)

TestAddons/parallel/CSI (51.35s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 2.8135ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-353000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-353000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [0b439820-0f59-4f35-9a4b-9eb68a108a82] Pending
helpers_test.go:344: "task-pv-pod" [0b439820-0f59-4f35-9a4b-9eb68a108a82] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [0b439820-0f59-4f35-9a4b-9eb68a108a82] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003548959s
addons_test.go:590: (dbg) Run:  kubectl --context addons-353000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-353000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-353000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-353000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-353000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-353000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-353000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e340754c-a9f7-452f-b3c6-f5a7c83d2671] Pending
helpers_test.go:344: "task-pv-pod-restore" [e340754c-a9f7-452f-b3c6-f5a7c83d2671] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e340754c-a9f7-452f-b3c6-f5a7c83d2671] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003964291s
addons_test.go:632: (dbg) Run:  kubectl --context addons-353000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-353000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-353000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-353000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-353000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.082447s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-353000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (51.35s)
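The repeated jsonpath polls above are the test helper's wait loop for the PVC to leave Pending. On kubectl 1.23+ the same condition can be expressed as one blocking command; a sketch, not what the test itself runs:

	# block until the PVC phase becomes Bound (or the timeout expires)
	kubectl --context addons-353000 wait --for=jsonpath='{.status.phase}'=Bound pvc/hpvc --timeout=6m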

TestAddons/parallel/Headlamp (17.57s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-353000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-sl6dd" [e17dfd6e-cfab-4207-b5fa-1597d7a1f4b2] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-sl6dd" [e17dfd6e-cfab-4207-b5fa-1597d7a1f4b2] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003346541s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-353000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-353000 addons disable headlamp --alsologtostderr -v=1: (5.224119458s)
--- PASS: TestAddons/parallel/Headlamp (17.57s)

TestAddons/parallel/CloudSpanner (5.09s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-ql72l" [9c486cb3-31ff-41e0-b557-e4914c63e14e] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002721666s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-353000
--- PASS: TestAddons/parallel/CloudSpanner (5.09s)

TestAddons/parallel/LocalPath (40.79s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-353000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-353000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [de00c764-377c-4406-be08-c435e76946d2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [de00c764-377c-4406-be08-c435e76946d2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [de00c764-377c-4406-be08-c435e76946d2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00341875s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-353000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-353000 ssh "cat /opt/local-path-provisioner/pvc-6ffc8c12-a070-4125-b1bb-c43ca062d418_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-353000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-353000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-353000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-353000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.335304375s)
--- PASS: TestAddons/parallel/LocalPath (40.79s)

TestAddons/parallel/NvidiaDevicePlugin (5.15s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-nw6r9" [7c69f934-c645-49ee-9e49-265be5ce1c98] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004570583s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-353000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.15s)

TestAddons/parallel/Yakd (10.24s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-6mb4k" [998e08c3-db33-4a9e-9720-2bb7b510a91d] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00358375s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-353000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-353000 addons disable yakd --alsologtostderr -v=1: (5.235085583s)
--- PASS: TestAddons/parallel/Yakd (10.24s)

TestAddons/StoppedEnableDisable (12.39s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-353000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-353000: (12.202617875s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-353000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-353000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-353000
--- PASS: TestAddons/StoppedEnableDisable (12.39s)

TestHyperKitDriverInstallOrUpdate (10.72s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.72s)

TestErrorSpam/setup (35.52s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-973000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-973000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-973000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-973000 --driver=qemu2 : (35.519856291s)
--- PASS: TestErrorSpam/setup (35.52s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-973000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-973000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-973000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-973000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-973000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-973000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-973000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-973000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-973000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-973000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-973000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-973000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.64s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-973000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-973000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-973000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-973000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-973000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-973000 pause
--- PASS: TestErrorSpam/pause (0.64s)

TestErrorSpam/unpause (0.61s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-973000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-973000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-973000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-973000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-973000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-973000 unpause
--- PASS: TestErrorSpam/unpause (0.61s)

TestErrorSpam/stop (64.3s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-973000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-973000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-973000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-973000 stop: (12.200461459s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-973000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-973000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-973000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-973000 stop: (26.059848584s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-973000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-973000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-973000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-973000 stop: (26.036554667s)
--- PASS: TestErrorSpam/stop (64.30s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19348-1218/.minikube/files/etc/test/nested/copy/1714/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (51.04s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-905000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0729 15:54:39.644419    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0729 15:54:39.651264    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0729 15:54:39.663306    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0729 15:54:39.685337    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0729 15:54:39.727407    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0729 15:54:39.809490    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0729 15:54:39.971543    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0729 15:54:40.293592    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0729 15:54:40.935756    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0729 15:54:42.217947    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0729 15:54:44.779214    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0729 15:54:49.901284    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-905000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (51.04412725s)
--- PASS: TestFunctional/serial/StartWithProxy (51.04s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.77s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-905000 --alsologtostderr -v=8
E0729 15:55:00.143646    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0729 15:55:20.624048    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-905000 --alsologtostderr -v=8: (35.772664792s)
functional_test.go:659: soft start took 35.773054542s for "functional-905000" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.77s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-905000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-905000 cache add registry.k8s.io/pause:3.1: (1.000117959s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.56s)

TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-905000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3496406511/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 cache add minikube-local-cache-test:functional-905000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 cache delete minikube-local-cache-test:functional-905000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-905000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.62s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (66.0525ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.62s)
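The sequence above is the cache reload contract in miniature: remove a cached image from the node, observe that crictl no longer finds it, then let cache reload re-push everything in minikube's local image cache. A condensed replay, using only commands verbatim from this test:

	out/minikube-darwin-arm64 -p functional-905000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-arm64 -p functional-905000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit status 1: image is gone
	out/minikube-darwin-arm64 -p functional-905000 cache reload
	out/minikube-darwin-arm64 -p functional-905000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again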
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 kubectl -- --context functional-905000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.66s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-905000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.92s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-905000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0729 15:56:01.586053    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-905000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.743738708s)
functional_test.go:757: restart took 37.74385875s for "functional-905000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.74s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-905000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)
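Note: the health check above reduces to reading pod phase and the Ready condition from `kubectl ... -o=json`. A stripped-down sketch of that parsing, assuming the same context name as in the log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the fields the check needs from `kubectl get po -o json`.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	// Assumption: kubectl and the functional-905000 context exist as in the log.
	out, err := exec.Command("kubectl", "--context", "functional-905000",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s phase: %s\n", p.Metadata.Name, p.Status.Phase)
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Printf("%s ready: %s\n", p.Metadata.Name, c.Status)
			}
		}
	}
}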
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.65s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd1995239807/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.62s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-905000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-905000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-905000: exit status 115 (97.931125ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30992 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-905000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.84s)
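Note: exit status 115 corresponds to SVC_UNREACHABLE, per the stderr above: the Service object exists (so the URL table still prints), but no running pod backs it. A sketch that checks for that specific exit code, under the same PATH/profile assumptions:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// `minikube service` exits 115 (SVC_UNREACHABLE) when the service has no
	// running endpoints, as in the invalid-svc case above.
	err := exec.Command("minikube", "service", "invalid-svc", "-p", "functional-905000").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 115 {
		fmt.Println("service exists but is unreachable (exit status 115)")
	}
}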
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 config get cpus: exit status 14 (29.630375ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 config get cpus: exit status 14 (29.586459ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
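Note: `config get` on an unset key exits with status 14, which is what the test keys on. A minimal sketch (same PATH/profile assumptions):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "functional-905000", "config", "get", "cpus")
	err := cmd.Run()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("cpus is set")
	case errors.As(err, &ee) && ee.ExitCode() == 14:
		fmt.Println("cpus is unset (exit status 14)")
	default:
		fmt.Println("unexpected failure:", err)
	}
}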
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-905000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-905000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2668: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.33s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-905000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-905000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (119.912708ms)
-- stdout --
	* [functional-905000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0729 15:57:03.379636    2651 out.go:291] Setting OutFile to fd 1 ...
	I0729 15:57:03.379786    2651 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 15:57:03.379789    2651 out.go:304] Setting ErrFile to fd 2...
	I0729 15:57:03.379792    2651 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 15:57:03.379920    2651 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 15:57:03.381068    2651 out.go:298] Setting JSON to false
	I0729 15:57:03.399259    2651 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1590,"bootTime":1722292233,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 15:57:03.399333    2651 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 15:57:03.405187    2651 out.go:177] * [functional-905000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 15:57:03.413194    2651 notify.go:220] Checking for updates...
	I0729 15:57:03.416225    2651 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 15:57:03.419313    2651 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 15:57:03.423212    2651 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 15:57:03.427270    2651 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 15:57:03.430243    2651 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 15:57:03.434235    2651 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 15:57:03.437484    2651 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 15:57:03.437744    2651 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 15:57:03.442101    2651 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 15:57:03.449190    2651 start.go:297] selected driver: qemu2
	I0729 15:57:03.449199    2651 start.go:901] validating driver "qemu2" against &{Name:functional-905000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-905000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 15:57:03.449247    2651 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 15:57:03.455160    2651 out.go:177] 
	W0729 15:57:03.459143    2651 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0729 15:57:03.463249    2651 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-905000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)
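Note: the dry run fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) because 250MiB is below minikube's stated 1800MB floor, without touching the existing profile. A sketch exercising just that validation path (profile name assumed):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// --dry-run validates flags against the existing profile without starting
	// anything; 250MB trips the minimum-memory check seen in the log.
	err := exec.Command("minikube", "start", "-p", "functional-905000",
		"--dry-run", "--memory", "250MB", "--driver=qemu2").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 23 {
		fmt.Println("memory request rejected (exit status 23)")
	}
}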
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-905000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-905000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (115.928125ms)
-- stdout --
	* [functional-905000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0729 15:57:03.602780    2662 out.go:291] Setting OutFile to fd 1 ...
	I0729 15:57:03.602920    2662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 15:57:03.602923    2662 out.go:304] Setting ErrFile to fd 2...
	I0729 15:57:03.602925    2662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 15:57:03.603061    2662 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
	I0729 15:57:03.604448    2662 out.go:298] Setting JSON to false
	I0729 15:57:03.621473    2662 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1590,"bootTime":1722292233,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 15:57:03.621565    2662 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 15:57:03.623413    2662 out.go:177] * [functional-905000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0729 15:57:03.630277    2662 out.go:177]   - MINIKUBE_LOCATION=19348
	I0729 15:57:03.630333    2662 notify.go:220] Checking for updates...
	I0729 15:57:03.637225    2662 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	I0729 15:57:03.640201    2662 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 15:57:03.647177    2662 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 15:57:03.655229    2662 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	I0729 15:57:03.659185    2662 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 15:57:03.662450    2662 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 15:57:03.662695    2662 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 15:57:03.667273    2662 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0729 15:57:03.674167    2662 start.go:297] selected driver: qemu2
	I0729 15:57:03.674172    2662 start.go:901] validating driver "qemu2" against &{Name:functional-905000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-905000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 15:57:03.674217    2662 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 15:57:03.679173    2662 out.go:177] 
	W0729 15:57:03.683167    2662 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0729 15:57:03.687254    2662 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
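Note: the French output comes from running the same dry-run command under a French locale; minikube picks its translations from the process environment. A sketch, assuming LC_ALL is the knob the harness uses (an assumption, the log does not show the environment):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "functional-905000",
		"--dry-run", "--memory", "250MB", "--driver=qemu2")
	// Assumption: a French locale in the environment selects the fr translations.
	cmd.Env = append(os.Environ(), "LC_ALL=fr")
	out, _ := cmd.CombinedOutput()
	fmt.Printf("%s", out) // expect "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ..."
}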
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)
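Note: `status -f` accepts a Go template over the status struct, which is how the second invocation above builds its one-line summary. A sketch of the three call shapes (profile name assumed):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	p := "functional-905000" // assumption: running profile
	for _, args := range [][]string{
		{"-p", p, "status"},
		{"-p", p, "status", "-f", "host:{{.Host}},kubelet:{{.Kubelet}}"},
		{"-p", p, "status", "-o", "json"},
	} {
		out, _ := exec.Command("minikube", args...).CombinedOutput()
		fmt.Printf("$ minikube %v\n%s\n", args, out)
	}
}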
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f9d08e05-bde0-4b7f-9163-ae252781daef] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.002646166s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-905000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-905000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-905000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-905000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9cc2b0b3-5d82-4154-b023-520379e17109] Pending
helpers_test.go:344: "sp-pod" [9cc2b0b3-5d82-4154-b023-520379e17109] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9cc2b0b3-5d82-4154-b023-520379e17109] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.00419125s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-905000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-905000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-905000 delete -f testdata/storage-provisioner/pod.yaml: (1.066128917s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-905000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9ef66166-41ca-474d-934d-bc42624d6027] Pending
helpers_test.go:344: "sp-pod" [9ef66166-41ca-474d-934d-bc42624d6027] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9ef66166-41ca-474d-934d-bc42624d6027] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003795917s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-905000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.47s)
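Note: the waits above poll until pods matching a label report Running. The same wait can be approximated with a jsonpath query; a sketch, assuming a single matching pod as in the runs above:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll the phase of the sp-pod created from testdata/storage-provisioner/pod.yaml.
	// kubectl exits non-zero while no pod matches, so errors just mean "keep waiting".
	for deadline := time.Now().Add(3 * time.Minute); time.Now().Before(deadline); time.Sleep(3 * time.Second) {
		out, err := exec.Command("kubectl", "--context", "functional-905000",
			"get", "pods", "-l", "test=storage-provisioner",
			"-o", "jsonpath={.items[0].status.phase}").Output()
		if err == nil && string(out) == "Running" {
			fmt.Println("pod is Running")
			return
		}
	}
	fmt.Println("timed out waiting for pod")
}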
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.12s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh -n functional-905000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 cp functional-905000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2394949960/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh -n functional-905000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh -n functional-905000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.41s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1714/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "sudo cat /etc/test/nested/copy/1714/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.06s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1714.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "sudo cat /etc/ssl/certs/1714.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1714.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "sudo cat /usr/share/ca-certificates/1714.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/17142.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "sudo cat /etc/ssl/certs/17142.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/17142.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "sudo cat /usr/share/ca-certificates/17142.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.38s)
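Note: CertSync checks each synced certificate both at its copied path and at a hash-named path under /etc/ssl/certs (names like 51391683.0 follow OpenSSL's subject-hash convention, to the best of my knowledge). A sketch of the existence loop, with the filenames taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	p := "functional-905000" // assumption: running profile
	paths := []string{
		"/etc/ssl/certs/1714.pem",
		"/usr/share/ca-certificates/1714.pem",
		"/etc/ssl/certs/51391683.0", // hash-named copy (OpenSSL subject hash, assumed)
	}
	for _, path := range paths {
		if err := exec.Command("minikube", "-p", p, "ssh", "sudo cat "+path).Run(); err != nil {
			fmt.Println("missing:", path)
		} else {
			fmt.Println("present:", path)
		}
	}
}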
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-905000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "sudo systemctl is-active crio": exit status 1 (130.719917ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.13s)
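Note: the non-zero exit here is the expected outcome: `systemctl is-active` returns 3 for an inactive unit, and `minikube ssh` surfaces the remote failure as its own non-zero exit. A sketch that treats the failure as the passing case (profile name assumed):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// With docker as the active runtime, crio should be inactive, so the ssh
	// command failing is what we want to see.
	out, err := exec.Command("minikube", "-p", "functional-905000",
		"ssh", "sudo systemctl is-active crio").CombinedOutput()
	if err != nil {
		fmt.Printf("crio is not active (output: %s)\n", out)
	} else {
		fmt.Println("unexpected: crio reports active")
	}
}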
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.16s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-905000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-905000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-905000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-905000 image ls --format short --alsologtostderr:
I0729 15:57:11.280117    2694 out.go:291] Setting OutFile to fd 1 ...
I0729 15:57:11.280263    2694 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 15:57:11.280266    2694 out.go:304] Setting ErrFile to fd 2...
I0729 15:57:11.280269    2694 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 15:57:11.280401    2694 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
I0729 15:57:11.280779    2694 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 15:57:11.280840    2694 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 15:57:11.281654    2694 ssh_runner.go:195] Run: systemctl --version
I0729 15:57:11.281663    2694 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/functional-905000/id_rsa Username:docker}
I0729 15:57:11.305149    2694 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-905000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.30.3           | 2351f570ed0ea | 87.9MB |
| docker.io/kicbase/echo-server               | functional-905000 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-scheduler              | v1.30.3           | d48f992a22722 | 60.5MB |
| docker.io/library/nginx                     | alpine            | d7cd33d7d4ed1 | 44.8MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/library/minikube-local-cache-test | functional-905000 | 3225ef563cfad | 30B    |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 61773190d42ff | 112MB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 8e97cdb19e7cc | 107MB  |
| docker.io/library/nginx                     | latest            | 43b17fe33c4b4 | 193MB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-905000 image ls --format table --alsologtostderr:
I0729 15:57:11.485904    2700 out.go:291] Setting OutFile to fd 1 ...
I0729 15:57:11.486057    2700 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 15:57:11.486060    2700 out.go:304] Setting ErrFile to fd 2...
I0729 15:57:11.486063    2700 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 15:57:11.486192    2700 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
I0729 15:57:11.486613    2700 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 15:57:11.486677    2700 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 15:57:11.487527    2700 ssh_runner.go:195] Run: systemctl --version
I0729 15:57:11.487538    2700 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/functional-905000/id_rsa Username:docker}
I0729 15:57:11.510925    2700 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-905000 image ls --format json --alsologtostderr:
[{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"107000000"},{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"87900000"},{"id":"d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"44800000"},{"id":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"112000000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-905000"],"size":"4780000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"3225ef563cfad4c786d93136c41f25ea2f8d312ff7a245e32fa5c7c98458383b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-905000"],"size":"30"},{"id":"43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"60500000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-905000 image ls --format json --alsologtostderr:
I0729 15:57:11.415353    2698 out.go:291] Setting OutFile to fd 1 ...
I0729 15:57:11.415502    2698 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 15:57:11.415508    2698 out.go:304] Setting ErrFile to fd 2...
I0729 15:57:11.415511    2698 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 15:57:11.415649    2698 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
I0729 15:57:11.416046    2698 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 15:57:11.416108    2698 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 15:57:11.416945    2698 ssh_runner.go:195] Run: systemctl --version
I0729 15:57:11.416953    2698 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/functional-905000/id_rsa Username:docker}
I0729 15:57:11.440802    2698 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)
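Note: of the four `image ls` formats, JSON is the easiest to consume programmatically. A sketch that decodes the array shown above into a small struct, with field names taken from the output itself:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the JSON output above.
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

func main() {
	// Assumption: minikube on PATH and the same profile as in the log.
	out, err := exec.Command("minikube", "-p", "functional-905000",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var imgs []image
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	for _, im := range imgs {
		fmt.Printf("%.12s  %s  %v\n", im.ID, im.Size, im.RepoTags)
	}
}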
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-905000 image ls --format yaml --alsologtostderr:
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "60500000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 3225ef563cfad4c786d93136c41f25ea2f8d312ff7a245e32fa5c7c98458383b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-905000
size: "30"
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "107000000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "112000000"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-905000
size: "4780000"
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "87900000"
- id: d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "44800000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-905000 image ls --format yaml --alsologtostderr:
I0729 15:57:11.347664    2696 out.go:291] Setting OutFile to fd 1 ...
I0729 15:57:11.347830    2696 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 15:57:11.347833    2696 out.go:304] Setting ErrFile to fd 2...
I0729 15:57:11.347836    2696 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 15:57:11.347994    2696 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
I0729 15:57:11.348422    2696 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 15:57:11.348486    2696 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 15:57:11.349310    2696 ssh_runner.go:195] Run: systemctl --version
I0729 15:57:11.349322    2696 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/functional-905000/id_rsa Username:docker}
I0729 15:57:11.372990    2696 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh pgrep buildkitd: exit status 1 (58.606ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image build -t localhost/my-image:functional-905000 testdata/build --alsologtostderr
2024/07/29 15:57:11 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-905000 image build -t localhost/my-image:functional-905000 testdata/build --alsologtostderr: (1.583746917s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-905000 image build -t localhost/my-image:functional-905000 testdata/build --alsologtostderr:
I0729 15:57:11.611895    2704 out.go:291] Setting OutFile to fd 1 ...
I0729 15:57:11.612235    2704 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 15:57:11.612241    2704 out.go:304] Setting ErrFile to fd 2...
I0729 15:57:11.612244    2704 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 15:57:11.612383    2704 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19348-1218/.minikube/bin
I0729 15:57:11.612795    2704 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 15:57:11.613596    2704 config.go:182] Loaded profile config "functional-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 15:57:11.614431    2704 ssh_runner.go:195] Run: systemctl --version
I0729 15:57:11.614441    2704 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19348-1218/.minikube/machines/functional-905000/id_rsa Username:docker}
I0729 15:57:11.638383    2704 build_images.go:161] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3630256788.tar
I0729 15:57:11.638464    2704 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0729 15:57:11.641858    2704 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3630256788.tar
I0729 15:57:11.643238    2704 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3630256788.tar: stat -c "%s %y" /var/lib/minikube/build/build.3630256788.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3630256788.tar': No such file or directory
I0729 15:57:11.643253    2704 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3630256788.tar --> /var/lib/minikube/build/build.3630256788.tar (3072 bytes)
I0729 15:57:11.651735    2704 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3630256788
I0729 15:57:11.655649    2704 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3630256788 -xf /var/lib/minikube/build/build.3630256788.tar
I0729 15:57:11.659228    2704 docker.go:360] Building image: /var/lib/minikube/build/build.3630256788
I0729 15:57:11.659272    2704 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-905000 /var/lib/minikube/build/build.3630256788
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:03595c25f707262d7bab4f661d916627432092bc7c5388ae28cf8c8646f7f1c8 done
#8 naming to localhost/my-image:functional-905000 done
#8 DONE 0.0s
I0729 15:57:13.093802    2704 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-905000 /var/lib/minikube/build/build.3630256788: (1.43452575s)
I0729 15:57:13.093876    2704 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3630256788
I0729 15:57:13.097716    2704 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3630256788.tar
I0729 15:57:13.100884    2704 build_images.go:217] Built localhost/my-image:functional-905000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3630256788.tar
I0729 15:57:13.100898    2704 build_images.go:133] succeeded building to: functional-905000
I0729 15:57:13.100902    2704 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.71s)
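Note: the BuildKit stages above (#5 [1/3] FROM, #6 [2/3] RUN true, #7 [3/3] ADD content.txt /) imply that testdata/build contains a three-step Dockerfile of roughly 97 bytes. A minimal sketch consistent with that log is shown below; the actual file in the minikube repository may differ in details such as comments or whitespace.

# Sketch only -- reconstructed from build stages #5-#7 in the log above;
# the real testdata/build Dockerfile may differ.
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /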

TestFunctional/parallel/ImageCommands/Setup (1.81s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.792218s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-905000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.81s)

TestFunctional/parallel/DockerEnv/bash (0.28s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-905000 docker-env) && out/minikube-darwin-arm64 status -p functional-905000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-905000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.28s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-905000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-905000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-kq27q" [68d6f59d-b414-4c34-8d94-10b93ac540b4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-kq27q" [68d6f59d-b414-4c34-8d94-10b93ac540b4] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.0041955s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image load --daemon docker.io/kicbase/echo-server:functional-905000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image load --daemon docker.io/kicbase/echo-server:functional-905000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.46s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-905000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image load --daemon docker.io/kicbase/echo-server:functional-905000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image save docker.io/kicbase/echo-server:functional-905000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.14s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image rm docker.io/kicbase/echo-server:functional-905000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-905000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 image save --daemon docker.io/kicbase/echo-server:functional-905000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-905000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.18s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-905000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-905000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-905000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2508: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-905000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-905000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-905000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [5318e567-a47c-44c4-8f65-8b4930442c29] Pending
helpers_test.go:344: "nginx-svc" [5318e567-a47c-44c4-8f65-8b4930442c29] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [5318e567-a47c-44c4-8f65-8b4930442c29] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.00352275s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)

TestFunctional/parallel/ServiceCmd/List (0.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.08s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 service list -o json
functional_test.go:1490: Took "83.887459ms" to run "out/minikube-darwin-arm64 -p functional-905000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.105.4:30280
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.09s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.105.4:30280
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-905000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.165.122 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-905000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "88.851791ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "34.633541ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "86.822208ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "33.604625ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

TestFunctional/parallel/MountCmd/any-port (5.97s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-905000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3955146712/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722293815803207000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3955146712/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722293815803207000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3955146712/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722293815803207000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3955146712/001/test-1722293815803207000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (58.727292ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (55.577958ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 29 22:56 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 29 22:56 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 29 22:56 test-1722293815803207000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh cat /mount-9p/test-1722293815803207000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-905000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ffef8aab-e427-42bd-bbf4-5ed5f96ee354] Pending
helpers_test.go:344: "busybox-mount" [ffef8aab-e427-42bd-bbf4-5ed5f96ee354] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ffef8aab-e427-42bd-bbf4-5ed5f96ee354] Running
helpers_test.go:344: "busybox-mount" [ffef8aab-e427-42bd-bbf4-5ed5f96ee354] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ffef8aab-e427-42bd-bbf4-5ed5f96ee354] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.00372875s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-905000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-905000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3955146712/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.97s)

TestFunctional/parallel/MountCmd/specific-port (0.97s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-905000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3208121530/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (60.548041ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-905000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3208121530/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "sudo umount -f /mount-9p": exit status 1 (58.495417ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-905000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-905000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3208121530/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.97s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.61s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-905000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2103708035/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-905000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2103708035/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-905000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2103708035/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T" /mount1: exit status 1 (65.833458ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-905000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-905000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-905000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2103708035/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-905000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2103708035/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-905000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2103708035/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.61s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-905000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-905000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-905000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (201.17s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-365000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0729 15:57:23.507760    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0729 15:59:39.641860    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0729 16:00:07.348011    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/addons-353000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-365000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m20.970752584s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (201.17s)

TestMultiControlPlane/serial/DeployApp (4.76s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-365000 -- rollout status deployment/busybox: (3.22773475s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- exec busybox-fc5497c4f-9vm9r -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- exec busybox-fc5497c4f-rsnqt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- exec busybox-fc5497c4f-swdlv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- exec busybox-fc5497c4f-9vm9r -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- exec busybox-fc5497c4f-rsnqt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- exec busybox-fc5497c4f-swdlv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- exec busybox-fc5497c4f-9vm9r -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- exec busybox-fc5497c4f-rsnqt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- exec busybox-fc5497c4f-swdlv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.76s)

TestMultiControlPlane/serial/PingHostFromPods (0.75s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- exec busybox-fc5497c4f-9vm9r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- exec busybox-fc5497c4f-9vm9r -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- exec busybox-fc5497c4f-rsnqt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- exec busybox-fc5497c4f-rsnqt -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- exec busybox-fc5497c4f-swdlv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-365000 -- exec busybox-fc5497c4f-swdlv -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.75s)

TestMultiControlPlane/serial/AddWorkerNode (55.77s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-365000 -v=7 --alsologtostderr
E0729 16:01:18.255621    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
E0729 16:01:18.261941    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
E0729 16:01:18.272487    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
E0729 16:01:18.294571    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
E0729 16:01:18.336625    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
E0729 16:01:18.417358    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
E0729 16:01:18.579144    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
E0729 16:01:18.901260    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
E0729 16:01:19.543378    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
E0729 16:01:20.823648    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
E0729 16:01:23.385849    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
E0729 16:01:28.508002    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-365000 -v=7 --alsologtostderr: (55.536503542s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.77s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-365000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.26s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.26s)

TestMultiControlPlane/serial/CopyFile (4.34s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 cp testdata/cp-test.txt ha-365000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 cp ha-365000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile1803800116/001/cp-test_ha-365000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 cp ha-365000:/home/docker/cp-test.txt ha-365000-m02:/home/docker/cp-test_ha-365000_ha-365000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000-m02 "sudo cat /home/docker/cp-test_ha-365000_ha-365000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 cp ha-365000:/home/docker/cp-test.txt ha-365000-m03:/home/docker/cp-test_ha-365000_ha-365000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000-m03 "sudo cat /home/docker/cp-test_ha-365000_ha-365000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 cp ha-365000:/home/docker/cp-test.txt ha-365000-m04:/home/docker/cp-test_ha-365000_ha-365000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000-m04 "sudo cat /home/docker/cp-test_ha-365000_ha-365000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 cp testdata/cp-test.txt ha-365000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 cp ha-365000-m02:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile1803800116/001/cp-test_ha-365000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 cp ha-365000-m02:/home/docker/cp-test.txt ha-365000:/home/docker/cp-test_ha-365000-m02_ha-365000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000 "sudo cat /home/docker/cp-test_ha-365000-m02_ha-365000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 cp ha-365000-m02:/home/docker/cp-test.txt ha-365000-m03:/home/docker/cp-test_ha-365000-m02_ha-365000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000-m03 "sudo cat /home/docker/cp-test_ha-365000-m02_ha-365000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 cp ha-365000-m02:/home/docker/cp-test.txt ha-365000-m04:/home/docker/cp-test_ha-365000-m02_ha-365000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000-m04 "sudo cat /home/docker/cp-test_ha-365000-m02_ha-365000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 cp testdata/cp-test.txt ha-365000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 cp ha-365000-m03:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile1803800116/001/cp-test_ha-365000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000-m03 "sudo cat /home/docker/cp-test.txt"
E0729 16:01:38.748886    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 cp ha-365000-m03:/home/docker/cp-test.txt ha-365000:/home/docker/cp-test_ha-365000-m03_ha-365000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000 "sudo cat /home/docker/cp-test_ha-365000-m03_ha-365000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 cp ha-365000-m03:/home/docker/cp-test.txt ha-365000-m02:/home/docker/cp-test_ha-365000-m03_ha-365000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000-m02 "sudo cat /home/docker/cp-test_ha-365000-m03_ha-365000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 cp ha-365000-m03:/home/docker/cp-test.txt ha-365000-m04:/home/docker/cp-test_ha-365000-m03_ha-365000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000-m04 "sudo cat /home/docker/cp-test_ha-365000-m03_ha-365000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 cp testdata/cp-test.txt ha-365000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 cp ha-365000-m04:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile1803800116/001/cp-test_ha-365000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 cp ha-365000-m04:/home/docker/cp-test.txt ha-365000:/home/docker/cp-test_ha-365000-m04_ha-365000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000 "sudo cat /home/docker/cp-test_ha-365000-m04_ha-365000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 cp ha-365000-m04:/home/docker/cp-test.txt ha-365000-m02:/home/docker/cp-test_ha-365000-m04_ha-365000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000-m02 "sudo cat /home/docker/cp-test_ha-365000-m04_ha-365000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 cp ha-365000-m04:/home/docker/cp-test.txt ha-365000-m03:/home/docker/cp-test_ha-365000-m04_ha-365000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-365000 ssh -n ha-365000-m03 "sudo cat /home/docker/cp-test_ha-365000-m04_ha-365000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.34s)
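Note: the CopyFile steps above all follow one pattern: copy a file to a node with "minikube cp", read it back with "minikube ssh -n <node> sudo cat", and compare against the source. A condensed Go sketch of that round-trip, assuming the same binary and profile names as the log (an illustrative helper, not the actual helpers_test.go code):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// roundTrip copies src to dstPath on dstNode, reads it back over ssh,
// and verifies the bytes match, mirroring the cp/ssh pairs logged above.
func roundTrip(bin, profile, src, dstNode, dstPath string) error {
	if err := exec.Command(bin, "-p", profile, "cp", src, dstNode+":"+dstPath).Run(); err != nil {
		return fmt.Errorf("cp to %s: %w", dstNode, err)
	}
	got, err := exec.Command(bin, "-p", profile, "ssh", "-n", dstNode, "sudo cat "+dstPath).Output()
	if err != nil {
		return fmt.Errorf("ssh cat on %s: %w", dstNode, err)
	}
	want, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	if !bytes.Equal(got, want) {
		return fmt.Errorf("content mismatch on %s:%s", dstNode, dstPath)
	}
	return nil
}

func main() {
	if err := roundTrip("out/minikube-darwin-arm64", "ha-365000",
		"testdata/cp-test.txt", "ha-365000-m04", "/home/docker/cp-test.txt"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}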

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.1s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0729 16:16:18.250281    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
E0729 16:17:41.313083    1714 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19348-1218/.minikube/profiles/functional-905000/client.crt: no such file or directory
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m30.102005833s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.10s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.34s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-613000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-613000 --output=json --user=testUser: (3.33606075s)
--- PASS: TestJSONOutput/stop/Command (3.34s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-451000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-451000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (94.8095ms)

-- stdout --
	{"specversion":"1.0","id":"759133b3-e86a-4805-81f3-afb9d50856d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-451000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ac23bb78-cf33-4dd4-bb84-382e26f554be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19348"}}
	{"specversion":"1.0","id":"87304117-b9e8-47f7-9123-a061020177ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig"}}
	{"specversion":"1.0","id":"bc88ecc2-8bf9-44a5-a68c-a6457cf9f38e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"29eaf4a6-3326-400c-930b-3476c524225c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2c068e72-57b6-4025-9dc3-882a587d7d67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube"}}
	{"specversion":"1.0","id":"950cf62d-ff38-4035-afa8-73825993a928","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0ff96c21-a1e2-4f5a-9c3f-8b076ed0fbe3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-451000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-451000
--- PASS: TestErrorJSONOutput (0.20s)
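Note: each stdout line above is a CloudEvents-style JSON object, which is what the --output=json assertions parse. A minimal Go sketch of decoding the error event from the output above (the struct is illustrative; the field names come from the log, not from minikube's source):

package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent mirrors the fields visible in the stdout lines above.
type cloudEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// The io.k8s.sigs.minikube.error event captured in the log.
	line := `{"specversion":"1.0","id":"0ff96c21-a1e2-4f5a-9c3f-8b076ed0fbe3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Println(ev.Type, ev.Data["name"], ev.Data["exitcode"])
	// Output: io.k8s.sigs.minikube.error DRV_UNSUPPORTED_OS 56
}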

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.4s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.40s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-365000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-365000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (99.85375ms)

-- stdout --
	* [NoKubernetes-365000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19348-1218/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19348-1218/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
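Note: the MK_USAGE failure above (exit status 14) comes from --no-kubernetes and --kubernetes-version being mutually exclusive. A sketch of that kind of flag-conflict check, using the standard flag package rather than minikube's actual command wiring:

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start the node without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	// Reject the conflicting combination, as the stderr above shows.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // the exit status recorded above
	}
}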

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-365000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-365000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.796875ms)

-- stdout --
	* The control-plane node NoKubernetes-365000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-365000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.27s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.600612167s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.664824625s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.27s)

TestNoKubernetes/serial/Stop (3.55s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-365000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-365000: (3.547344625s)
--- PASS: TestNoKubernetes/serial/Stop (3.55s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-365000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-365000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (44.889583ms)

-- stdout --
	* The control-plane node NoKubernetes-365000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-365000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.76s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-170000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.76s)

TestStartStop/group/old-k8s-version/serial/Stop (1.82s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-004000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-004000 --alsologtostderr -v=3: (1.820134541s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.82s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.1s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-004000 -n old-k8s-version-004000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-004000 -n old-k8s-version-004000: exit status 7 (37.23ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-004000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.10s)
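Note: the --format={{.Host}} argument used above is a Go text/template applied to the profile's status, and exit status 7 marks a stopped host (hence "may be ok"). An illustrative sketch of that template rendering; the Status struct is a stand-in, not minikube's actual type:

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for the structure the template is rendered against.
type Status struct {
	Host string // e.g. "Running" or "Stopped", as captured in the stdout above
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	// Prints "Stopped", matching the log's stdout.
	if err := tmpl.Execute(os.Stdout, Status{Host: "Stopped"}); err != nil {
		panic(err)
	}
}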

TestStartStop/group/no-preload/serial/Stop (3.01s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-814000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-814000 --alsologtostderr -v=3: (3.009858458s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-814000 -n no-preload-814000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-814000 -n no-preload-814000: exit status 7 (32.53225ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-814000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (3.2s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-449000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-449000 --alsologtostderr -v=3: (3.201438375s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.20s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-449000 -n embed-certs-449000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-449000 -n embed-certs-449000: exit status 7 (56.168125ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-449000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.54s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-770000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-770000 --alsologtostderr -v=3: (3.536521459s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.54s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-770000 -n default-k8s-diff-port-770000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-770000 -n default-k8s-diff-port-770000: exit status 7 (64.123792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-770000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-028000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.72s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-028000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-028000 --alsologtostderr -v=3: (3.723809167s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.72s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-028000 -n newest-cni-028000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-028000 -n newest-cni-028000: exit status 7 (62.688208ms)

-- stdout --
	Stopped

                                                
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-028000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (23/278)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.4s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-295000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-295000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-295000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-295000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-295000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-295000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-295000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-295000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-295000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-295000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-295000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: /etc/hosts:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: /etc/resolv.conf:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-295000

>>> host: crictl pods:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: crictl containers:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> k8s: describe netcat deployment:
error: context "cilium-295000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-295000" does not exist

>>> k8s: netcat logs:
error: context "cilium-295000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-295000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-295000" does not exist

>>> k8s: coredns logs:
error: context "cilium-295000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-295000" does not exist

>>> k8s: api server logs:
error: context "cilium-295000" does not exist

>>> host: /etc/cni:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: ip a s:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: ip r s:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: iptables-save:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: iptables table nat:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-295000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-295000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-295000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-295000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-295000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-295000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-295000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-295000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-295000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-295000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-295000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: kubelet daemon config:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> k8s: kubelet logs:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-295000

>>> host: docker daemon status:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: docker daemon config:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: docker system info:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: cri-docker daemon status:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: cri-docker daemon config:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: cri-dockerd version:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: containerd daemon status:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: containerd daemon config:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: containerd config dump:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: crio daemon status:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: crio daemon config:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: /etc/crio:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

>>> host: crio config:
* Profile "cilium-295000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-295000"

----------------------- debugLogs end: cilium-295000 [took: 2.29104925s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-295000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-295000
--- SKIP: TestNetworkPlugins/group/cilium (2.40s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-772000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-772000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)