Test Report: QEMU_macOS 19429

b06913c07d6338950e5c7fdbd8346c60c9653ed1:2024-08-13:35775

Failed tests (97/274)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 16.95
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 9.88
46 TestCertOptions 10.1
47 TestCertExpiration 195.29
48 TestDockerFlags 10.07
49 TestForceSystemdFlag 10.04
50 TestForceSystemdEnv 10.65
95 TestFunctional/parallel/ServiceCmdConnect 29.73
167 TestMultiControlPlane/serial/StopSecondaryNode 214.14
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 104.21
169 TestMultiControlPlane/serial/RestartSecondaryNode 208.76
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 283.49
172 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.04
174 TestMultiControlPlane/serial/StopCluster 251.16
175 TestMultiControlPlane/serial/RestartCluster 5.26
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
177 TestMultiControlPlane/serial/AddSecondaryNode 0.07
181 TestImageBuild/serial/Setup 10.06
184 TestJSONOutput/start/Command 9.77
190 TestJSONOutput/pause/Command 0.08
196 TestJSONOutput/unpause/Command 0.05
213 TestMinikubeProfile 10.15
216 TestMountStart/serial/StartWithMountFirst 10.25
219 TestMultiNode/serial/FreshStart2Nodes 9.98
220 TestMultiNode/serial/DeployApp2Nodes 116.85
221 TestMultiNode/serial/PingHostFrom2Pods 0.09
222 TestMultiNode/serial/AddNode 0.07
223 TestMultiNode/serial/MultiNodeLabels 0.06
224 TestMultiNode/serial/ProfileList 0.08
225 TestMultiNode/serial/CopyFile 0.06
226 TestMultiNode/serial/StopNode 0.14
227 TestMultiNode/serial/StartAfterStop 45.28
228 TestMultiNode/serial/RestartKeepsNodes 8.8
229 TestMultiNode/serial/DeleteNode 0.1
230 TestMultiNode/serial/StopMultiNode 3.34
231 TestMultiNode/serial/RestartMultiNode 5.25
232 TestMultiNode/serial/ValidateNameConflict 20.12
236 TestPreload 9.96
238 TestScheduledStopUnix 9.97
239 TestSkaffold 12.66
242 TestRunningBinaryUpgrade 603.93
244 TestKubernetesUpgrade 18.5
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.5
258 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.03
260 TestStoppedBinaryUpgrade/Upgrade 571.95
262 TestPause/serial/Start 9.99
272 TestNoKubernetes/serial/StartWithK8s 9.91
273 TestNoKubernetes/serial/StartWithStopK8s 5.43
274 TestNoKubernetes/serial/Start 5.34
278 TestNoKubernetes/serial/StartNoArgs 5.29
280 TestNetworkPlugins/group/auto/Start 9.88
281 TestNetworkPlugins/group/kindnet/Start 9.85
282 TestNetworkPlugins/group/flannel/Start 9.75
283 TestNetworkPlugins/group/enable-default-cni/Start 9.86
284 TestNetworkPlugins/group/bridge/Start 9.85
285 TestNetworkPlugins/group/kubenet/Start 9.75
286 TestNetworkPlugins/group/custom-flannel/Start 9.79
287 TestNetworkPlugins/group/calico/Start 9.83
288 TestNetworkPlugins/group/false/Start 9.78
291 TestStartStop/group/old-k8s-version/serial/FirstStart 9.82
292 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
296 TestStartStop/group/old-k8s-version/serial/SecondStart 5.23
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
300 TestStartStop/group/old-k8s-version/serial/Pause 0.1
302 TestStartStop/group/no-preload/serial/FirstStart 9.89
304 TestStartStop/group/embed-certs/serial/FirstStart 9.93
305 TestStartStop/group/no-preload/serial/DeployApp 0.09
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.1
309 TestStartStop/group/no-preload/serial/SecondStart 6.11
310 TestStartStop/group/embed-certs/serial/DeployApp 0.09
311 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
314 TestStartStop/group/embed-certs/serial/SecondStart 5.89
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
316 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.05
317 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
318 TestStartStop/group/no-preload/serial/Pause 0.1
320 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.97
321 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
322 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.05
323 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
324 TestStartStop/group/embed-certs/serial/Pause 0.1
326 TestStartStop/group/newest-cni/serial/FirstStart 9.93
327 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
334 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.25
336 TestStartStop/group/newest-cni/serial/SecondStart 5.25
337 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
338 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.05
339 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
340 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
344 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (16.95s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-133000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-133000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (16.949399334s)

-- stdout --
	{"specversion":"1.0","id":"91adc600-bf65-4f06-b3e0-4c6c8ebd335a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-133000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"95cffac6-83d5-46d1-a8de-bfa363d3a476","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19429"}}
	{"specversion":"1.0","id":"c4aca9e4-ee4d-4a3e-85d8-ae211d82e2ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig"}}
	{"specversion":"1.0","id":"189c08d1-dc51-4173-9036-0890eaa44da8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"ec4a57b0-bd9b-427f-86e4-fcc7a6551d03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b1faa839-b09f-404f-963b-67a776259247","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube"}}
	{"specversion":"1.0","id":"02d3046e-bb6a-4103-9fec-362051763c5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"478fa168-31e7-4356-9161-5cbfe4730e22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"29ff302a-84a9-4a93-9652-59c95229712f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"1b1fe6bf-4b2d-4b52-951b-92a2a8d9ed69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e44563b6-0c88-4869-95bf-a0e43e7f9829","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-133000\" primary control-plane node in \"download-only-133000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5d3b1ee0-7b67-41c2-9447-d5e0b05de3f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"519d12a9-9893-4c42-b34e-e0b3733c6942","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x105813920 0x105813920 0x105813920 0x105813920 0x105813920 0x105813920 0x105813920] Decompressors:map[bz2:0x1400013c7b0 gz:0x1400013c7b8 tar:0x1400013c720 tar.bz2:0x1400013c730 tar.gz:0x1400013c740 tar.xz:0x1400013c770 tar.zst:0x1400013c7a0 tbz2:0x1400013c730 tgz:0x1400013c740 txz:0x1400013c770 tzst:0x1400013c7a0 xz:0x1400013c7f0 zip:0x1400013c9d0 zst:0x1400013c7f8] Getters:map[file:0x1400090f1e0 http:0x140000b4230 https:0x140000b44b0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"3a6b9bb7-ba65-4ff1-a687-9f6d6b5babb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0813 16:46:00.636355    1637 out.go:291] Setting OutFile to fd 1 ...
	I0813 16:46:00.636527    1637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 16:46:00.636531    1637 out.go:304] Setting ErrFile to fd 2...
	I0813 16:46:00.636533    1637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 16:46:00.636669    1637 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	W0813 16:46:00.636763    1637 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19429-1127/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19429-1127/.minikube/config/config.json: no such file or directory
	I0813 16:46:00.638094    1637 out.go:298] Setting JSON to true
	I0813 16:46:00.655393    1637 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":924,"bootTime":1723591836,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 16:46:00.655462    1637 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 16:46:00.660012    1637 out.go:97] [download-only-133000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 16:46:00.660120    1637 notify.go:220] Checking for updates...
	W0813 16:46:00.660157    1637 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball: no such file or directory
	I0813 16:46:00.664005    1637 out.go:169] MINIKUBE_LOCATION=19429
	I0813 16:46:00.667087    1637 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 16:46:00.672006    1637 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 16:46:00.674998    1637 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 16:46:00.678054    1637 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	W0813 16:46:00.683997    1637 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0813 16:46:00.684221    1637 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 16:46:00.689059    1637 out.go:97] Using the qemu2 driver based on user configuration
	I0813 16:46:00.689079    1637 start.go:297] selected driver: qemu2
	I0813 16:46:00.689094    1637 start.go:901] validating driver "qemu2" against <nil>
	I0813 16:46:00.689199    1637 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 16:46:00.691904    1637 out.go:169] Automatically selected the socket_vmnet network
	I0813 16:46:00.697695    1637 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0813 16:46:00.697796    1637 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0813 16:46:00.697936    1637 cni.go:84] Creating CNI manager for ""
	I0813 16:46:00.697958    1637 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0813 16:46:00.698008    1637 start.go:340] cluster config:
	{Name:download-only-133000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-133000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 16:46:00.703795    1637 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 16:46:00.708085    1637 out.go:97] Downloading VM boot image ...
	I0813 16:46:00.708105    1637 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso
	I0813 16:46:08.348426    1637 out.go:97] Starting "download-only-133000" primary control-plane node in "download-only-133000" cluster
	I0813 16:46:08.348461    1637 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0813 16:46:08.414771    1637 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0813 16:46:08.414778    1637 cache.go:56] Caching tarball of preloaded images
	I0813 16:46:08.414959    1637 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0813 16:46:08.419133    1637 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0813 16:46:08.419140    1637 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0813 16:46:08.505027    1637 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0813 16:46:16.419141    1637 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0813 16:46:16.419310    1637 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0813 16:46:17.114903    1637 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0813 16:46:17.115118    1637 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/download-only-133000/config.json ...
	I0813 16:46:17.115137    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/download-only-133000/config.json: {Name:mk1e307aa0132670a13c259e2d7d9e8dbfa93103 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 16:46:17.115387    1637 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0813 16:46:17.115581    1637 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0813 16:46:17.509690    1637 out.go:169] 
	W0813 16:46:17.515887    1637 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x105813920 0x105813920 0x105813920 0x105813920 0x105813920 0x105813920 0x105813920] Decompressors:map[bz2:0x1400013c7b0 gz:0x1400013c7b8 tar:0x1400013c720 tar.bz2:0x1400013c730 tar.gz:0x1400013c740 tar.xz:0x1400013c770 tar.zst:0x1400013c7a0 tbz2:0x1400013c730 tgz:0x1400013c740 txz:0x1400013c770 tzst:0x1400013c7a0 xz:0x1400013c7f0 zip:0x1400013c9d0 zst:0x1400013c7f8] Getters:map[file:0x1400090f1e0 http:0x140000b4230 https:0x140000b44b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0813 16:46:17.515913    1637 out_reason.go:110] 
	W0813 16:46:17.523726    1637 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 16:46:17.527739    1637 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-133000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (16.95s)
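Note: the root cause above is a 404 on the kubectl checksum URL for v1.20.0 on darwin/arm64, which suggests upstream never published darwin/arm64 client binaries for that release; it also explains the TestDownloadOnly/v1.20.0/kubectl failure below. A minimal check from any shell, assuming curl is available (the arm64 URL is taken verbatim from the error; the amd64 URL is an illustrative comparison, not from the log):

	curl -sIL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256   # 404, matching the log
	curl -sIL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256   # should be 200 if that arch was published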

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (9.88s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-628000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-628000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.732354209s)

-- stdout --
	* [offline-docker-628000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-628000" primary control-plane node in "offline-docker-628000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-628000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0813 17:24:32.412080    3844 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:24:32.412209    3844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:24:32.412213    3844 out.go:304] Setting ErrFile to fd 2...
	I0813 17:24:32.412215    3844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:24:32.412336    3844 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:24:32.413353    3844 out.go:298] Setting JSON to false
	I0813 17:24:32.431125    3844 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3236,"bootTime":1723591836,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:24:32.431189    3844 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:24:32.435761    3844 out.go:177] * [offline-docker-628000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:24:32.443653    3844 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:24:32.443666    3844 notify.go:220] Checking for updates...
	I0813 17:24:32.453570    3844 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:24:32.456624    3844 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:24:32.459594    3844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:24:32.462585    3844 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:24:32.465670    3844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:24:32.469049    3844 config.go:182] Loaded profile config "multinode-980000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:24:32.469113    3844 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:24:32.473614    3844 out.go:177] * Using the qemu2 driver based on user configuration
	I0813 17:24:32.480610    3844 start.go:297] selected driver: qemu2
	I0813 17:24:32.480625    3844 start.go:901] validating driver "qemu2" against <nil>
	I0813 17:24:32.480637    3844 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:24:32.482703    3844 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 17:24:32.485560    3844 out.go:177] * Automatically selected the socket_vmnet network
	I0813 17:24:32.488701    3844 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 17:24:32.488739    3844 cni.go:84] Creating CNI manager for ""
	I0813 17:24:32.488749    3844 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0813 17:24:32.488757    3844 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 17:24:32.488802    3844 start.go:340] cluster config:
	{Name:offline-docker-628000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-628000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:24:32.492456    3844 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:24:32.499540    3844 out.go:177] * Starting "offline-docker-628000" primary control-plane node in "offline-docker-628000" cluster
	I0813 17:24:32.503458    3844 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:24:32.503486    3844 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 17:24:32.503503    3844 cache.go:56] Caching tarball of preloaded images
	I0813 17:24:32.503577    3844 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:24:32.503582    3844 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0813 17:24:32.503657    3844 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/offline-docker-628000/config.json ...
	I0813 17:24:32.503666    3844 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/offline-docker-628000/config.json: {Name:mk0b2037aa6a0ce9c4b9b478033a6debc5fb3da4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:24:32.503942    3844 start.go:360] acquireMachinesLock for offline-docker-628000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:24:32.503973    3844 start.go:364] duration metric: took 24.959µs to acquireMachinesLock for "offline-docker-628000"
	I0813 17:24:32.503984    3844 start.go:93] Provisioning new machine with config: &{Name:offline-docker-628000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-628000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:24:32.504025    3844 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:24:32.511372    3844 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0813 17:24:32.527203    3844 start.go:159] libmachine.API.Create for "offline-docker-628000" (driver="qemu2")
	I0813 17:24:32.527233    3844 client.go:168] LocalClient.Create starting
	I0813 17:24:32.527308    3844 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:24:32.527338    3844 main.go:141] libmachine: Decoding PEM data...
	I0813 17:24:32.527347    3844 main.go:141] libmachine: Parsing certificate...
	I0813 17:24:32.527393    3844 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:24:32.527416    3844 main.go:141] libmachine: Decoding PEM data...
	I0813 17:24:32.527428    3844 main.go:141] libmachine: Parsing certificate...
	I0813 17:24:32.527801    3844 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:24:32.708230    3844 main.go:141] libmachine: Creating SSH key...
	I0813 17:24:32.737996    3844 main.go:141] libmachine: Creating Disk image...
	I0813 17:24:32.738006    3844 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:24:32.738212    3844 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/offline-docker-628000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/offline-docker-628000/disk.qcow2
	I0813 17:24:32.748255    3844 main.go:141] libmachine: STDOUT: 
	I0813 17:24:32.748277    3844 main.go:141] libmachine: STDERR: 
	I0813 17:24:32.748340    3844 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/offline-docker-628000/disk.qcow2 +20000M
	I0813 17:24:32.757139    3844 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:24:32.757168    3844 main.go:141] libmachine: STDERR: 
	I0813 17:24:32.757187    3844 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/offline-docker-628000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/offline-docker-628000/disk.qcow2
	I0813 17:24:32.757192    3844 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:24:32.757214    3844 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:24:32.757246    3844 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/offline-docker-628000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/offline-docker-628000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/offline-docker-628000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:8d:98:94:97:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/offline-docker-628000/disk.qcow2
	I0813 17:24:32.758980    3844 main.go:141] libmachine: STDOUT: 
	I0813 17:24:32.758998    3844 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:24:32.759018    3844 client.go:171] duration metric: took 231.78275ms to LocalClient.Create
	I0813 17:24:34.759069    3844 start.go:128] duration metric: took 2.255065958s to createHost
	I0813 17:24:34.759098    3844 start.go:83] releasing machines lock for "offline-docker-628000", held for 2.255152917s
	W0813 17:24:34.759125    3844 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:24:34.764372    3844 out.go:177] * Deleting "offline-docker-628000" in qemu2 ...
	W0813 17:24:34.781173    3844 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:24:34.781188    3844 start.go:729] Will try again in 5 seconds ...
	I0813 17:24:39.783425    3844 start.go:360] acquireMachinesLock for offline-docker-628000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:24:39.783860    3844 start.go:364] duration metric: took 338.125µs to acquireMachinesLock for "offline-docker-628000"
	I0813 17:24:39.784001    3844 start.go:93] Provisioning new machine with config: &{Name:offline-docker-628000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-628000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:24:39.784261    3844 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:24:39.793922    3844 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0813 17:24:39.844140    3844 start.go:159] libmachine.API.Create for "offline-docker-628000" (driver="qemu2")
	I0813 17:24:39.844188    3844 client.go:168] LocalClient.Create starting
	I0813 17:24:39.844300    3844 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:24:39.844370    3844 main.go:141] libmachine: Decoding PEM data...
	I0813 17:24:39.844389    3844 main.go:141] libmachine: Parsing certificate...
	I0813 17:24:39.844457    3844 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:24:39.844511    3844 main.go:141] libmachine: Decoding PEM data...
	I0813 17:24:39.844525    3844 main.go:141] libmachine: Parsing certificate...
	I0813 17:24:39.845034    3844 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:24:40.007481    3844 main.go:141] libmachine: Creating SSH key...
	I0813 17:24:40.045292    3844 main.go:141] libmachine: Creating Disk image...
	I0813 17:24:40.045298    3844 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:24:40.045509    3844 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/offline-docker-628000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/offline-docker-628000/disk.qcow2
	I0813 17:24:40.064824    3844 main.go:141] libmachine: STDOUT: 
	I0813 17:24:40.064842    3844 main.go:141] libmachine: STDERR: 
	I0813 17:24:40.064909    3844 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/offline-docker-628000/disk.qcow2 +20000M
	I0813 17:24:40.073067    3844 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:24:40.073091    3844 main.go:141] libmachine: STDERR: 
	I0813 17:24:40.073103    3844 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/offline-docker-628000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/offline-docker-628000/disk.qcow2
	I0813 17:24:40.073108    3844 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:24:40.073119    3844 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:24:40.073147    3844 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/offline-docker-628000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/offline-docker-628000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/offline-docker-628000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:fd:ac:5d:05:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/offline-docker-628000/disk.qcow2
	I0813 17:24:40.074613    3844 main.go:141] libmachine: STDOUT: 
	I0813 17:24:40.074632    3844 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:24:40.074646    3844 client.go:171] duration metric: took 230.455625ms to LocalClient.Create
	I0813 17:24:42.076792    3844 start.go:128] duration metric: took 2.292527125s to createHost
	I0813 17:24:42.076940    3844 start.go:83] releasing machines lock for "offline-docker-628000", held for 2.29308775s
	W0813 17:24:42.077352    3844 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-628000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-628000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:24:42.086747    3844 out.go:177] 
	W0813 17:24:42.090867    3844 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:24:42.090923    3844 out.go:239] * 
	* 
	W0813 17:24:42.093912    3844 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:24:42.103806    3844 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-628000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-13 17:24:42.117553 -0700 PDT m=+2321.553315917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-628000 -n offline-docker-628000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-628000 -n offline-docker-628000: exit status 7 (67.397917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-628000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-628000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-628000
--- FAIL: TestOffline (9.88s)
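Note: every qemu2 start in this run dies the same way, with "Connection refused" on /var/run/socket_vmnet before the VM ever boots, so the likely culprit is the host's socket_vmnet daemon rather than minikube. A quick host-side sketch to confirm, assuming macOS's BSD nc(1) (socket path taken from the log):

	ls -l /var/run/socket_vmnet    # is the socket file even present?
	nc -U /var/run/socket_vmnet < /dev/null && echo "socket_vmnet answering" || echo "refused: daemon not running"
	# If the daemon is down and your setup uses Homebrew's socket_vmnet,
	# minikube's qemu2 driver docs restart it via Homebrew services, e.g.:
	#   sudo "$(which brew)" services start socket_vmnet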

TestCertOptions (10.1s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-114000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-114000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.8309555s)

-- stdout --
	* [cert-options-114000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-114000" primary control-plane node in "cert-options-114000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-114000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-114000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-114000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-114000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-114000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (83.116667ms)

-- stdout --
	* The control-plane node cert-options-114000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-114000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-114000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-114000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-114000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-114000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (42.37125ms)

-- stdout --
	* The control-plane node cert-options-114000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-114000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-114000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port.
-- stdout --
	* The control-plane node cert-options-114000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-114000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-13 17:25:12.983106 -0700 PDT m=+2352.419315167
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-114000 -n cert-options-114000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-114000 -n cert-options-114000: exit status 7 (30.502125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-114000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-114000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-114000
--- FAIL: TestCertOptions (10.10s)

TestCertExpiration (195.29s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-967000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-967000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.936902833s)

-- stdout --
	* [cert-expiration-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-967000" primary control-plane node in "cert-expiration-967000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-967000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-967000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-967000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-967000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.226142458s)

-- stdout --
	* [cert-expiration-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-967000" primary control-plane node in "cert-expiration-967000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-967000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-967000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-967000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-967000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-967000" primary control-plane node in "cert-expiration-967000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-967000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-967000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-967000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
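TestCertExpiration starts with --cert-expiration=3m, and the ~195 s wall clock suggests the test waits out that window before the second start with --cert-expiration=8760h, which is expected to warn about the expired certs; both starts died in socket_vmnet, so the restart output above contains no such warning. A hedged sketch of confirming expiry on a node that is actually running (cert path as used earlier in this report):

	minikube -p cert-expiration-967000 ssh -- \
		"sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"
	# prints notAfter=<timestamp>; with --cert-expiration=3m that timestamp falls
	# roughly three minutes after the first start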
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-13 17:28:13.055735 -0700 PDT m=+2532.494553876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-967000 -n cert-expiration-967000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-967000 -n cert-expiration-967000: exit status 7 (44.298542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-967000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-967000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-967000
--- FAIL: TestCertExpiration (195.29s)

TestDockerFlags (10.07s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-903000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-903000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.834144125s)

-- stdout --
	* [docker-flags-903000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-903000" primary control-plane node in "docker-flags-903000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-903000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0813 17:24:52.946348    4041 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:24:52.946475    4041 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:24:52.946478    4041 out.go:304] Setting ErrFile to fd 2...
	I0813 17:24:52.946481    4041 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:24:52.946618    4041 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:24:52.947746    4041 out.go:298] Setting JSON to false
	I0813 17:24:52.963963    4041 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3256,"bootTime":1723591836,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:24:52.964027    4041 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:24:52.970161    4041 out.go:177] * [docker-flags-903000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:24:52.978030    4041 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:24:52.978087    4041 notify.go:220] Checking for updates...
	I0813 17:24:52.985975    4041 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:24:52.989000    4041 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:24:52.991965    4041 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:24:52.997860    4041 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:24:53.004930    4041 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:24:53.009284    4041 config.go:182] Loaded profile config "force-systemd-flag-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:24:53.009356    4041 config.go:182] Loaded profile config "multinode-980000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:24:53.009442    4041 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:24:53.013023    4041 out.go:177] * Using the qemu2 driver based on user configuration
	I0813 17:24:53.018956    4041 start.go:297] selected driver: qemu2
	I0813 17:24:53.018962    4041 start.go:901] validating driver "qemu2" against <nil>
	I0813 17:24:53.018968    4041 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:24:53.021323    4041 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 17:24:53.023955    4041 out.go:177] * Automatically selected the socket_vmnet network
	I0813 17:24:53.026969    4041 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0813 17:24:53.027008    4041 cni.go:84] Creating CNI manager for ""
	I0813 17:24:53.027015    4041 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0813 17:24:53.027020    4041 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 17:24:53.027052    4041 start.go:340] cluster config:
	{Name:docker-flags-903000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-903000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:24:53.031081    4041 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:24:53.038837    4041 out.go:177] * Starting "docker-flags-903000" primary control-plane node in "docker-flags-903000" cluster
	I0813 17:24:53.043000    4041 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:24:53.043017    4041 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 17:24:53.043028    4041 cache.go:56] Caching tarball of preloaded images
	I0813 17:24:53.043098    4041 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:24:53.043105    4041 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0813 17:24:53.043186    4041 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/docker-flags-903000/config.json ...
	I0813 17:24:53.043201    4041 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/docker-flags-903000/config.json: {Name:mkf047d6d12c026d7044fe8b6ad45e150992f14f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:24:53.043422    4041 start.go:360] acquireMachinesLock for docker-flags-903000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:24:53.043457    4041 start.go:364] duration metric: took 29.292µs to acquireMachinesLock for "docker-flags-903000"
	I0813 17:24:53.043471    4041 start.go:93] Provisioning new machine with config: &{Name:docker-flags-903000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-903000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:24:53.043498    4041 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:24:53.048927    4041 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0813 17:24:53.067049    4041 start.go:159] libmachine.API.Create for "docker-flags-903000" (driver="qemu2")
	I0813 17:24:53.067083    4041 client.go:168] LocalClient.Create starting
	I0813 17:24:53.067142    4041 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:24:53.067172    4041 main.go:141] libmachine: Decoding PEM data...
	I0813 17:24:53.067181    4041 main.go:141] libmachine: Parsing certificate...
	I0813 17:24:53.067218    4041 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:24:53.067241    4041 main.go:141] libmachine: Decoding PEM data...
	I0813 17:24:53.067248    4041 main.go:141] libmachine: Parsing certificate...
	I0813 17:24:53.067646    4041 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:24:53.214694    4041 main.go:141] libmachine: Creating SSH key...
	I0813 17:24:53.298213    4041 main.go:141] libmachine: Creating Disk image...
	I0813 17:24:53.298219    4041 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:24:53.298421    4041 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/docker-flags-903000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/docker-flags-903000/disk.qcow2
	I0813 17:24:53.307719    4041 main.go:141] libmachine: STDOUT: 
	I0813 17:24:53.307738    4041 main.go:141] libmachine: STDERR: 
	I0813 17:24:53.307780    4041 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/docker-flags-903000/disk.qcow2 +20000M
	I0813 17:24:53.315658    4041 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:24:53.315677    4041 main.go:141] libmachine: STDERR: 
	I0813 17:24:53.315694    4041 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/docker-flags-903000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/docker-flags-903000/disk.qcow2
	I0813 17:24:53.315699    4041 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:24:53.315710    4041 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:24:53.315756    4041 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/docker-flags-903000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/docker-flags-903000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/docker-flags-903000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:f0:7c:b0:6a:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/docker-flags-903000/disk.qcow2
	I0813 17:24:53.317373    4041 main.go:141] libmachine: STDOUT: 
	I0813 17:24:53.317389    4041 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:24:53.317408    4041 client.go:171] duration metric: took 250.323375ms to LocalClient.Create
	I0813 17:24:55.319581    4041 start.go:128] duration metric: took 2.276091708s to createHost
	I0813 17:24:55.319646    4041 start.go:83] releasing machines lock for "docker-flags-903000", held for 2.276210875s
	W0813 17:24:55.319705    4041 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:24:55.332181    4041 out.go:177] * Deleting "docker-flags-903000" in qemu2 ...
	W0813 17:24:55.371567    4041 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:24:55.371589    4041 start.go:729] Will try again in 5 seconds ...
	I0813 17:25:00.373683    4041 start.go:360] acquireMachinesLock for docker-flags-903000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:25:00.374033    4041 start.go:364] duration metric: took 272.458µs to acquireMachinesLock for "docker-flags-903000"
	I0813 17:25:00.374098    4041 start.go:93] Provisioning new machine with config: &{Name:docker-flags-903000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-903000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:25:00.374346    4041 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:25:00.394527    4041 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0813 17:25:00.442423    4041 start.go:159] libmachine.API.Create for "docker-flags-903000" (driver="qemu2")
	I0813 17:25:00.442473    4041 client.go:168] LocalClient.Create starting
	I0813 17:25:00.442603    4041 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:25:00.442664    4041 main.go:141] libmachine: Decoding PEM data...
	I0813 17:25:00.442681    4041 main.go:141] libmachine: Parsing certificate...
	I0813 17:25:00.442769    4041 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:25:00.442817    4041 main.go:141] libmachine: Decoding PEM data...
	I0813 17:25:00.442827    4041 main.go:141] libmachine: Parsing certificate...
	I0813 17:25:00.443594    4041 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:25:00.605853    4041 main.go:141] libmachine: Creating SSH key...
	I0813 17:25:00.677352    4041 main.go:141] libmachine: Creating Disk image...
	I0813 17:25:00.677359    4041 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:25:00.677563    4041 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/docker-flags-903000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/docker-flags-903000/disk.qcow2
	I0813 17:25:00.687052    4041 main.go:141] libmachine: STDOUT: 
	I0813 17:25:00.687071    4041 main.go:141] libmachine: STDERR: 
	I0813 17:25:00.687112    4041 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/docker-flags-903000/disk.qcow2 +20000M
	I0813 17:25:00.695152    4041 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:25:00.695169    4041 main.go:141] libmachine: STDERR: 
	I0813 17:25:00.695179    4041 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/docker-flags-903000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/docker-flags-903000/disk.qcow2
	I0813 17:25:00.695183    4041 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:25:00.695192    4041 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:25:00.695228    4041 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/docker-flags-903000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/docker-flags-903000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/docker-flags-903000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:13:da:0b:79:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/docker-flags-903000/disk.qcow2
	I0813 17:25:00.696885    4041 main.go:141] libmachine: STDOUT: 
	I0813 17:25:00.696901    4041 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:25:00.696912    4041 client.go:171] duration metric: took 254.435917ms to LocalClient.Create
	I0813 17:25:02.699109    4041 start.go:128] duration metric: took 2.324761625s to createHost
	I0813 17:25:02.699185    4041 start.go:83] releasing machines lock for "docker-flags-903000", held for 2.325164667s
	W0813 17:25:02.699644    4041 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-903000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-903000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:25:02.717521    4041 out.go:177] 
	W0813 17:25:02.725155    4041 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:25:02.725185    4041 out.go:239] * 
	* 
	W0813 17:25:02.727683    4041 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:25:02.738216    4041 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-903000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
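The verbose log above shows the failing step precisely: socket_vmnet_client is asked to connect to /var/run/socket_vmnet and exec qemu-system-aarch64 with -netdev socket,id=net0,fd=3, i.e. the guest NIC rides on a descriptor handed over from that connection, so qemu never starts when the connect is refused. A hedged way to reproduce just that step in isolation (macOS nc supports unix sockets via -U):

	nc -U /var/run/socket_vmnet </dev/null && echo connected || echo refused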
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-903000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-903000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (80.791292ms)

-- stdout --
	* The control-plane node docker-flags-903000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-903000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-903000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-903000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-903000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-903000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-903000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-903000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-903000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (44.591417ms)

-- stdout --
	* The control-plane node docker-flags-903000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-903000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-903000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-903000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-903000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-903000\"\n"
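Both assertions probe the docker systemd unit inside the guest: the --docker-env values should surface under the unit's Environment= property and the --docker-opt values as extra dockerd flags in ExecStart=; with the host stopped, each ssh returned the advice text instead. A hedged sketch of the same probe against a healthy node (systemctl accepts repeated --property flags):

	minikube -p docker-flags-903000 ssh -- \
		"sudo systemctl show docker --property=Environment --property=ExecStart --no-pager"
	# a passing run would show Environment=FOO=BAR BAZ=BAT and an ExecStart line
	# containing --debug and --icc=true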
panic.go:626: *** TestDockerFlags FAILED at 2024-08-13 17:25:02.883094 -0700 PDT m=+2342.319157542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-903000 -n docker-flags-903000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-903000 -n docker-flags-903000: exit status 7 (28.807416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-903000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-903000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-903000
--- FAIL: TestDockerFlags (10.07s)

TestForceSystemdFlag (10.04s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-365000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-365000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.8501135s)

-- stdout --
	* [force-systemd-flag-365000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-365000" primary control-plane node in "force-systemd-flag-365000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-365000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0813 17:24:47.879545    4018 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:24:47.879663    4018 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:24:47.879667    4018 out.go:304] Setting ErrFile to fd 2...
	I0813 17:24:47.879670    4018 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:24:47.879812    4018 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:24:47.880875    4018 out.go:298] Setting JSON to false
	I0813 17:24:47.896623    4018 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3251,"bootTime":1723591836,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:24:47.896698    4018 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:24:47.902839    4018 out.go:177] * [force-systemd-flag-365000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:24:47.910812    4018 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:24:47.910867    4018 notify.go:220] Checking for updates...
	I0813 17:24:47.918727    4018 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:24:47.922729    4018 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:24:47.925820    4018 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:24:47.928815    4018 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:24:47.931804    4018 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:24:47.935154    4018 config.go:182] Loaded profile config "force-systemd-env-815000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:24:47.935222    4018 config.go:182] Loaded profile config "multinode-980000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:24:47.935268    4018 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:24:47.939796    4018 out.go:177] * Using the qemu2 driver based on user configuration
	I0813 17:24:47.946856    4018 start.go:297] selected driver: qemu2
	I0813 17:24:47.946864    4018 start.go:901] validating driver "qemu2" against <nil>
	I0813 17:24:47.946872    4018 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:24:47.949159    4018 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 17:24:47.952789    4018 out.go:177] * Automatically selected the socket_vmnet network
	I0813 17:24:47.955957    4018 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0813 17:24:47.955974    4018 cni.go:84] Creating CNI manager for ""
	I0813 17:24:47.955981    4018 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0813 17:24:47.955989    4018 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 17:24:47.956020    4018 start.go:340] cluster config:
	{Name:force-systemd-flag-365000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-365000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:24:47.959651    4018 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:24:47.966808    4018 out.go:177] * Starting "force-systemd-flag-365000" primary control-plane node in "force-systemd-flag-365000" cluster
	I0813 17:24:47.970816    4018 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:24:47.970831    4018 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 17:24:47.970838    4018 cache.go:56] Caching tarball of preloaded images
	I0813 17:24:47.970894    4018 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:24:47.970900    4018 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0813 17:24:47.970957    4018 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/force-systemd-flag-365000/config.json ...
	I0813 17:24:47.970969    4018 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/force-systemd-flag-365000/config.json: {Name:mk435843f5a4bb7f44e020070cdf33e85dd5b913 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:24:47.971188    4018 start.go:360] acquireMachinesLock for force-systemd-flag-365000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:24:47.971226    4018 start.go:364] duration metric: took 30.042µs to acquireMachinesLock for "force-systemd-flag-365000"
	I0813 17:24:47.971240    4018 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-365000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-365000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:24:47.971268    4018 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:24:47.978759    4018 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0813 17:24:47.996665    4018 start.go:159] libmachine.API.Create for "force-systemd-flag-365000" (driver="qemu2")
	I0813 17:24:47.996689    4018 client.go:168] LocalClient.Create starting
	I0813 17:24:47.996746    4018 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:24:47.996776    4018 main.go:141] libmachine: Decoding PEM data...
	I0813 17:24:47.996784    4018 main.go:141] libmachine: Parsing certificate...
	I0813 17:24:47.996816    4018 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:24:47.996838    4018 main.go:141] libmachine: Decoding PEM data...
	I0813 17:24:47.996846    4018 main.go:141] libmachine: Parsing certificate...
	I0813 17:24:47.997215    4018 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:24:48.143914    4018 main.go:141] libmachine: Creating SSH key...
	I0813 17:24:48.248943    4018 main.go:141] libmachine: Creating Disk image...
	I0813 17:24:48.248948    4018 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:24:48.249144    4018 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-flag-365000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-flag-365000/disk.qcow2
	I0813 17:24:48.258306    4018 main.go:141] libmachine: STDOUT: 
	I0813 17:24:48.258323    4018 main.go:141] libmachine: STDERR: 
	I0813 17:24:48.258368    4018 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-flag-365000/disk.qcow2 +20000M
	I0813 17:24:48.266214    4018 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:24:48.266230    4018 main.go:141] libmachine: STDERR: 
	I0813 17:24:48.266245    4018 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-flag-365000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-flag-365000/disk.qcow2
	I0813 17:24:48.266249    4018 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:24:48.266260    4018 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:24:48.266295    4018 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-flag-365000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-flag-365000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-flag-365000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:da:5d:5f:f9:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-flag-365000/disk.qcow2
	I0813 17:24:48.267852    4018 main.go:141] libmachine: STDOUT: 
	I0813 17:24:48.267868    4018 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:24:48.267887    4018 client.go:171] duration metric: took 271.196292ms to LocalClient.Create
	I0813 17:24:50.270020    4018 start.go:128] duration metric: took 2.298764583s to createHost
	I0813 17:24:50.270073    4018 start.go:83] releasing machines lock for "force-systemd-flag-365000", held for 2.298869375s
	W0813 17:24:50.270129    4018 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:24:50.284473    4018 out.go:177] * Deleting "force-systemd-flag-365000" in qemu2 ...
	W0813 17:24:50.322483    4018 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:24:50.322506    4018 start.go:729] Will try again in 5 seconds ...
	I0813 17:24:55.324544    4018 start.go:360] acquireMachinesLock for force-systemd-flag-365000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:24:55.324945    4018 start.go:364] duration metric: took 306.458µs to acquireMachinesLock for "force-systemd-flag-365000"
	I0813 17:24:55.325018    4018 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-365000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-365000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:24:55.325206    4018 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:24:55.340993    4018 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0813 17:24:55.390703    4018 start.go:159] libmachine.API.Create for "force-systemd-flag-365000" (driver="qemu2")
	I0813 17:24:55.390760    4018 client.go:168] LocalClient.Create starting
	I0813 17:24:55.390882    4018 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:24:55.390950    4018 main.go:141] libmachine: Decoding PEM data...
	I0813 17:24:55.390967    4018 main.go:141] libmachine: Parsing certificate...
	I0813 17:24:55.391032    4018 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:24:55.391075    4018 main.go:141] libmachine: Decoding PEM data...
	I0813 17:24:55.391087    4018 main.go:141] libmachine: Parsing certificate...
	I0813 17:24:55.391788    4018 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:24:55.552839    4018 main.go:141] libmachine: Creating SSH key...
	I0813 17:24:55.628704    4018 main.go:141] libmachine: Creating Disk image...
	I0813 17:24:55.628709    4018 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:24:55.628887    4018 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-flag-365000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-flag-365000/disk.qcow2
	I0813 17:24:55.638265    4018 main.go:141] libmachine: STDOUT: 
	I0813 17:24:55.638283    4018 main.go:141] libmachine: STDERR: 
	I0813 17:24:55.638341    4018 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-flag-365000/disk.qcow2 +20000M
	I0813 17:24:55.646230    4018 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:24:55.646251    4018 main.go:141] libmachine: STDERR: 
	I0813 17:24:55.646263    4018 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-flag-365000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-flag-365000/disk.qcow2
	I0813 17:24:55.646268    4018 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:24:55.646280    4018 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:24:55.646306    4018 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-flag-365000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-flag-365000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-flag-365000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:be:58:d0:23:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-flag-365000/disk.qcow2
	I0813 17:24:55.647946    4018 main.go:141] libmachine: STDOUT: 
	I0813 17:24:55.647961    4018 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:24:55.647973    4018 client.go:171] duration metric: took 257.210834ms to LocalClient.Create
	I0813 17:24:57.650118    4018 start.go:128] duration metric: took 2.324916083s to createHost
	I0813 17:24:57.650178    4018 start.go:83] releasing machines lock for "force-systemd-flag-365000", held for 2.32522975s
	W0813 17:24:57.650543    4018 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-365000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-365000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:24:57.668887    4018 out.go:177] 
	W0813 17:24:57.676113    4018 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:24:57.676152    4018 out.go:239] * 
	* 
	W0813 17:24:57.678668    4018 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:24:57.688981    4018 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-365000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-365000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-365000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (78.297167ms)

-- stdout --
	* The control-plane node force-systemd-flag-365000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-365000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-365000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-13 17:24:57.783973 -0700 PDT m=+2337.219962667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-365000 -n force-systemd-flag-365000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-365000 -n force-systemd-flag-365000: exit status 7 (33.70025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-365000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-365000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-365000
--- FAIL: TestForceSystemdFlag (10.04s)
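
Both attempts above die at the same step: libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no VM is ever created. The probe below is a minimal, hypothetical pre-flight sketch (not part of the minikube test suite) that reproduces just that step; the only input it assumes is the socket path seen in the failing command lines.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the failing qemu command line
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Same condition that surfaces above as `Failed to connect ... Connection refused`.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails on the CI host, restarting the socket_vmnet service there would be the first thing to try; minikube itself never gets past LocalClient.Create.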

TestForceSystemdEnv (10.65s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-815000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-815000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.467753666s)

-- stdout --
	* [force-systemd-env-815000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-815000" primary control-plane node in "force-systemd-env-815000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-815000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0813 17:24:42.292902    3983 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:24:42.293031    3983 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:24:42.293035    3983 out.go:304] Setting ErrFile to fd 2...
	I0813 17:24:42.293037    3983 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:24:42.293176    3983 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:24:42.294236    3983 out.go:298] Setting JSON to false
	I0813 17:24:42.310647    3983 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3246,"bootTime":1723591836,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:24:42.310721    3983 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:24:42.317054    3983 out.go:177] * [force-systemd-env-815000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:24:42.326125    3983 notify.go:220] Checking for updates...
	I0813 17:24:42.330031    3983 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:24:42.338069    3983 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:24:42.345967    3983 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:24:42.355029    3983 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:24:42.362004    3983 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:24:42.370049    3983 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0813 17:24:42.374285    3983 config.go:182] Loaded profile config "multinode-980000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:24:42.374343    3983 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:24:42.378069    3983 out.go:177] * Using the qemu2 driver based on user configuration
	I0813 17:24:42.384919    3983 start.go:297] selected driver: qemu2
	I0813 17:24:42.384927    3983 start.go:901] validating driver "qemu2" against <nil>
	I0813 17:24:42.384940    3983 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:24:42.387164    3983 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 17:24:42.391011    3983 out.go:177] * Automatically selected the socket_vmnet network
	I0813 17:24:42.395160    3983 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0813 17:24:42.395178    3983 cni.go:84] Creating CNI manager for ""
	I0813 17:24:42.395186    3983 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0813 17:24:42.395193    3983 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 17:24:42.395220    3983 start.go:340] cluster config:
	{Name:force-systemd-env-815000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-815000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:24:42.398761    3983 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:24:42.406061    3983 out.go:177] * Starting "force-systemd-env-815000" primary control-plane node in "force-systemd-env-815000" cluster
	I0813 17:24:42.409922    3983 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:24:42.409936    3983 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 17:24:42.409943    3983 cache.go:56] Caching tarball of preloaded images
	I0813 17:24:42.409999    3983 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:24:42.410005    3983 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0813 17:24:42.410053    3983 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/force-systemd-env-815000/config.json ...
	I0813 17:24:42.410064    3983 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/force-systemd-env-815000/config.json: {Name:mk9f4b430ca247a23f0faeb8cb19ecf5b87430fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:24:42.410258    3983 start.go:360] acquireMachinesLock for force-systemd-env-815000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:24:42.410293    3983 start.go:364] duration metric: took 27.167µs to acquireMachinesLock for "force-systemd-env-815000"
	I0813 17:24:42.410305    3983 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-815000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-815000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:24:42.410331    3983 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:24:42.419008    3983 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0813 17:24:42.435690    3983 start.go:159] libmachine.API.Create for "force-systemd-env-815000" (driver="qemu2")
	I0813 17:24:42.435711    3983 client.go:168] LocalClient.Create starting
	I0813 17:24:42.435775    3983 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:24:42.435805    3983 main.go:141] libmachine: Decoding PEM data...
	I0813 17:24:42.435813    3983 main.go:141] libmachine: Parsing certificate...
	I0813 17:24:42.435851    3983 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:24:42.435874    3983 main.go:141] libmachine: Decoding PEM data...
	I0813 17:24:42.435884    3983 main.go:141] libmachine: Parsing certificate...
	I0813 17:24:42.436218    3983 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:24:42.629856    3983 main.go:141] libmachine: Creating SSH key...
	I0813 17:24:42.842174    3983 main.go:141] libmachine: Creating Disk image...
	I0813 17:24:42.842189    3983 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:24:42.842395    3983 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-env-815000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-env-815000/disk.qcow2
	I0813 17:24:42.851999    3983 main.go:141] libmachine: STDOUT: 
	I0813 17:24:42.852023    3983 main.go:141] libmachine: STDERR: 
	I0813 17:24:42.852088    3983 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-env-815000/disk.qcow2 +20000M
	I0813 17:24:42.860312    3983 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:24:42.860327    3983 main.go:141] libmachine: STDERR: 
	I0813 17:24:42.860350    3983 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-env-815000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-env-815000/disk.qcow2
	I0813 17:24:42.860356    3983 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:24:42.860368    3983 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:24:42.860394    3983 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-env-815000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-env-815000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-env-815000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:87:c1:44:ba:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-env-815000/disk.qcow2
	I0813 17:24:42.862320    3983 main.go:141] libmachine: STDOUT: 
	I0813 17:24:42.862340    3983 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:24:42.862363    3983 client.go:171] duration metric: took 426.649916ms to LocalClient.Create
	I0813 17:24:44.864552    3983 start.go:128] duration metric: took 2.454227209s to createHost
	I0813 17:24:44.864693    3983 start.go:83] releasing machines lock for "force-systemd-env-815000", held for 2.454377792s
	W0813 17:24:44.864742    3983 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:24:44.871727    3983 out.go:177] * Deleting "force-systemd-env-815000" in qemu2 ...
	W0813 17:24:44.903702    3983 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:24:44.903732    3983 start.go:729] Will try again in 5 seconds ...
	I0813 17:24:49.905828    3983 start.go:360] acquireMachinesLock for force-systemd-env-815000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:24:50.270209    3983 start.go:364] duration metric: took 364.204917ms to acquireMachinesLock for "force-systemd-env-815000"
	I0813 17:24:50.270418    3983 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-815000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-815000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:24:50.270685    3983 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:24:50.280493    3983 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0813 17:24:50.330370    3983 start.go:159] libmachine.API.Create for "force-systemd-env-815000" (driver="qemu2")
	I0813 17:24:50.330416    3983 client.go:168] LocalClient.Create starting
	I0813 17:24:50.330541    3983 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:24:50.330599    3983 main.go:141] libmachine: Decoding PEM data...
	I0813 17:24:50.330616    3983 main.go:141] libmachine: Parsing certificate...
	I0813 17:24:50.330682    3983 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:24:50.330736    3983 main.go:141] libmachine: Decoding PEM data...
	I0813 17:24:50.330746    3983 main.go:141] libmachine: Parsing certificate...
	I0813 17:24:50.334059    3983 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:24:50.499466    3983 main.go:141] libmachine: Creating SSH key...
	I0813 17:24:50.664472    3983 main.go:141] libmachine: Creating Disk image...
	I0813 17:24:50.664479    3983 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:24:50.664718    3983 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-env-815000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-env-815000/disk.qcow2
	I0813 17:24:50.674620    3983 main.go:141] libmachine: STDOUT: 
	I0813 17:24:50.674643    3983 main.go:141] libmachine: STDERR: 
	I0813 17:24:50.674697    3983 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-env-815000/disk.qcow2 +20000M
	I0813 17:24:50.682547    3983 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:24:50.682570    3983 main.go:141] libmachine: STDERR: 
	I0813 17:24:50.682588    3983 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-env-815000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-env-815000/disk.qcow2
	I0813 17:24:50.682592    3983 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:24:50.682601    3983 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:24:50.682632    3983 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-env-815000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-env-815000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-env-815000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:00:7b:50:57:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/force-systemd-env-815000/disk.qcow2
	I0813 17:24:50.684248    3983 main.go:141] libmachine: STDOUT: 
	I0813 17:24:50.684266    3983 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:24:50.684279    3983 client.go:171] duration metric: took 353.863333ms to LocalClient.Create
	I0813 17:24:52.685464    3983 start.go:128] duration metric: took 2.414774917s to createHost
	I0813 17:24:52.685580    3983 start.go:83] releasing machines lock for "force-systemd-env-815000", held for 2.415358167s
	W0813 17:24:52.685927    3983 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-815000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-815000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:24:52.704448    3983 out.go:177] 
	W0813 17:24:52.706845    3983 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:24:52.706878    3983 out.go:239] * 
	* 
	W0813 17:24:52.708612    3983 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:24:52.719522    3983 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-815000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-815000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-815000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (73.724042ms)

-- stdout --
	* The control-plane node force-systemd-env-815000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-815000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-815000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-13 17:24:52.808163 -0700 PDT m=+2332.244080292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-815000 -n force-systemd-env-815000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-815000 -n force-systemd-env-815000: exit status 7 (34.07125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-815000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-815000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-815000
--- FAIL: TestForceSystemdEnv (10.65s)
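
This test fails before its real assertion is ever reached: with MINIKUBE_FORCE_SYSTEMD=true the suite would have verified the guest's Docker cgroup driver via ssh "docker info --format {{.CgroupDriver}}" (docker_test.go:110 above). A stand-alone sketch of that check, run against a local Docker daemon purely for illustration, since the guest never started:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// The same query the test issues over SSH inside the guest.
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		log.Fatalf("docker info failed: %v", err)
	}
	// With systemd forced on, the expected value is "systemd" rather than "cgroupfs".
	fmt.Println("cgroup driver:", strings.TrimSpace(string(out)))
}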

TestFunctional/parallel/ServiceCmdConnect (29.73s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-174000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-174000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-gf56q" [c62e2c3c-05a8-45ae-bd4c-48c40fb59433] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-gf56q" [c62e2c3c-05a8-45ae-bd4c-48c40fb59433] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.011004958s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:30951
functional_test.go:1661: error fetching http://192.168.105.4:30951: Get "http://192.168.105.4:30951": dial tcp 192.168.105.4:30951: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30951: Get "http://192.168.105.4:30951": dial tcp 192.168.105.4:30951: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30951: Get "http://192.168.105.4:30951": dial tcp 192.168.105.4:30951: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30951: Get "http://192.168.105.4:30951": dial tcp 192.168.105.4:30951: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30951: Get "http://192.168.105.4:30951": dial tcp 192.168.105.4:30951: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30951: Get "http://192.168.105.4:30951": dial tcp 192.168.105.4:30951: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30951: Get "http://192.168.105.4:30951": dial tcp 192.168.105.4:30951: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:30951: Get "http://192.168.105.4:30951": dial tcp 192.168.105.4:30951: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-174000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-gf56q
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-174000/192.168.105.4
Start Time:       Tue, 13 Aug 2024 16:55:47 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://0d1dc5475e061dd4ab4bff730a1d7df58a3a95301acca6cc81c5e7c45814e6c3
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 13 Aug 2024 16:56:04 -0700
      Finished:     Tue, 13 Aug 2024 16:56:04 -0700
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 13 Aug 2024 16:55:52 -0700
      Finished:     Tue, 13 Aug 2024 16:55:52 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xt7fh (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-xt7fh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  28s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-gf56q to functional-174000
  Normal   Pulling    28s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     24s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 4.014s (4.014s including waiting). Image size: 84957542 bytes.
  Normal   Created    12s (x3 over 24s)  kubelet            Created container echoserver-arm
  Normal   Started    12s (x3 over 24s)  kubelet            Started container echoserver-arm
  Normal   Pulled     12s (x2 over 24s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    12s (x3 over 23s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-gf56q_default(c62e2c3c-05a8-45ae-bd4c-48c40fb59433)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-174000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1614: (dbg) Run:  kubectl --context functional-174000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.108.197.6
IPs:                      10.108.197.6
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30951/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
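
The pod log above ("exec /usr/sbin/nginx: exec format error") together with the empty Endpoints: field explains the connection-refused loop: the container's entrypoint is a binary built for a different CPU architecture than the arm64 node, so the container exits on every restart and the NodePort service never gains an endpoint. A hypothetical stand-alone check (not part of the suite) that asks Docker which platform the image was built for, assuming the image is present in the local daemon:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	image := "registry.k8s.io/echoserver-arm:1.8" // image from the failing deployment
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Os}}/{{.Architecture}}", image).Output()
	if err != nil {
		log.Fatalf("inspect failed (the image may need to be pulled first): %v", err)
	}
	fmt.Printf("%s reports platform %s\n", image, strings.TrimSpace(string(out)))
}

Anything other than linux/arm64 here would confirm the mismatch behind the exec format error.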
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                        Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| service | functional-174000                                                                                                   | functional-174000 | jenkins | v1.33.1 | 13 Aug 24 16:56 PDT | 13 Aug 24 16:56 PDT |
	|         | service hello-node --url                                                                                            |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                    |                   |         |         |                     |                     |
	| service | functional-174000 service                                                                                           | functional-174000 | jenkins | v1.33.1 | 13 Aug 24 16:56 PDT | 13 Aug 24 16:56 PDT |
	|         | hello-node --url                                                                                                    |                   |         |         |                     |                     |
	| mount   | -p functional-174000                                                                                                | functional-174000 | jenkins | v1.33.1 | 13 Aug 24 16:56 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2311625679/001:/mount-9p     |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh     | functional-174000 ssh findmnt                                                                                       | functional-174000 | jenkins | v1.33.1 | 13 Aug 24 16:56 PDT | 13 Aug 24 16:56 PDT |
	|         | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh     | functional-174000 ssh -- ls                                                                                         | functional-174000 | jenkins | v1.33.1 | 13 Aug 24 16:56 PDT | 13 Aug 24 16:56 PDT |
	|         | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh     | functional-174000 ssh cat                                                                                           | functional-174000 | jenkins | v1.33.1 | 13 Aug 24 16:56 PDT | 13 Aug 24 16:56 PDT |
	|         | /mount-9p/test-1723593368133830000                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-174000 ssh stat                                                                                          | functional-174000 | jenkins | v1.33.1 | 13 Aug 24 16:56 PDT | 13 Aug 24 16:56 PDT |
	|         | /mount-9p/created-by-test                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-174000 ssh stat                                                                                          | functional-174000 | jenkins | v1.33.1 | 13 Aug 24 16:56 PDT | 13 Aug 24 16:56 PDT |
	|         | /mount-9p/created-by-pod                                                                                            |                   |         |         |                     |                     |
	| ssh     | functional-174000 ssh sudo                                                                                          | functional-174000 | jenkins | v1.33.1 | 13 Aug 24 16:56 PDT | 13 Aug 24 16:56 PDT |
	|         | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| ssh     | functional-174000 ssh findmnt                                                                                       | functional-174000 | jenkins | v1.33.1 | 13 Aug 24 16:56 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| mount   | -p functional-174000                                                                                                | functional-174000 | jenkins | v1.33.1 | 13 Aug 24 16:56 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port423540265/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                                                                 |                   |         |         |                     |                     |
	| ssh     | functional-174000 ssh findmnt                                                                                       | functional-174000 | jenkins | v1.33.1 | 13 Aug 24 16:56 PDT | 13 Aug 24 16:56 PDT |
	|         | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh     | functional-174000 ssh -- ls                                                                                         | functional-174000 | jenkins | v1.33.1 | 13 Aug 24 16:56 PDT | 13 Aug 24 16:56 PDT |
	|         | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh     | functional-174000 ssh sudo                                                                                          | functional-174000 | jenkins | v1.33.1 | 13 Aug 24 16:56 PDT |                     |
	|         | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| mount   | -p functional-174000                                                                                                | functional-174000 | jenkins | v1.33.1 | 13 Aug 24 16:56 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2187477251/001:/mount1  |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount   | -p functional-174000                                                                                                | functional-174000 | jenkins | v1.33.1 | 13 Aug 24 16:56 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2187477251/001:/mount3  |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount   | -p functional-174000                                                                                                | functional-174000 | jenkins | v1.33.1 | 13 Aug 24 16:56 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2187477251/001:/mount2  |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh     | functional-174000 ssh findmnt                                                                                       | functional-174000 | jenkins | v1.33.1 | 13 Aug 24 16:56 PDT |                     |
	|         | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-174000 ssh findmnt                                                                                       | functional-174000 | jenkins | v1.33.1 | 13 Aug 24 16:56 PDT | 13 Aug 24 16:56 PDT |
	|         | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-174000 ssh findmnt                                                                                       | functional-174000 | jenkins | v1.33.1 | 13 Aug 24 16:56 PDT | 13 Aug 24 16:56 PDT |
	|         | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-174000 ssh findmnt                                                                                       | functional-174000 | jenkins | v1.33.1 | 13 Aug 24 16:56 PDT | 13 Aug 24 16:56 PDT |
	|         | -T /mount3                                                                                                          |                   |         |         |                     |                     |
	| mount   | -p functional-174000                                                                                                | functional-174000 | jenkins | v1.33.1 | 13 Aug 24 16:56 PDT |                     |
	|         | --kill=true                                                                                                         |                   |         |         |                     |                     |
	| start   | -p functional-174000                                                                                                | functional-174000 | jenkins | v1.33.1 | 13 Aug 24 16:56 PDT |                     |
	|         | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|         | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|         | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start   | -p functional-174000                                                                                                | functional-174000 | jenkins | v1.33.1 | 13 Aug 24 16:56 PDT |                     |
	|         | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|         | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|         | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start   | -p functional-174000 --dry-run                                                                                      | functional-174000 | jenkins | v1.33.1 | 13 Aug 24 16:56 PDT |                     |
	|         | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	|         | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/13 16:56:16
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 16:56:16.209588    2283 out.go:291] Setting OutFile to fd 1 ...
	I0813 16:56:16.209738    2283 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 16:56:16.209741    2283 out.go:304] Setting ErrFile to fd 2...
	I0813 16:56:16.209743    2283 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 16:56:16.209866    2283 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 16:56:16.211174    2283 out.go:298] Setting JSON to false
	I0813 16:56:16.229195    2283 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1540,"bootTime":1723591836,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 16:56:16.229395    2283 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 16:56:16.233071    2283 out.go:177] * [functional-174000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 16:56:16.240192    2283 notify.go:220] Checking for updates...
	I0813 16:56:16.244032    2283 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 16:56:16.254970    2283 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 16:56:16.266051    2283 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 16:56:16.269035    2283 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 16:56:16.275077    2283 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 16:56:16.282987    2283 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 16:56:16.287357    2283 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 16:56:16.287599    2283 driver.go:392] Setting default libvirt URI to qemu:///system
	
	
	==> Docker <==
	Aug 13 23:56:10 functional-174000 dockerd[5658]: time="2024-08-13T23:56:10.051297065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 13 23:56:10 functional-174000 dockerd[5658]: time="2024-08-13T23:56:10.051515648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 13 23:56:10 functional-174000 dockerd[5658]: time="2024-08-13T23:56:10.051584481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 13 23:56:10 functional-174000 cri-dockerd[5913]: time="2024-08-13T23:56:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5f85fe9b384fcd44e6c0fd00acf54f5fd00f44b58fb8a92b13417a4f51aaecf0/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 13 23:56:11 functional-174000 cri-dockerd[5913]: time="2024-08-13T23:56:11Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Aug 13 23:56:11 functional-174000 dockerd[5658]: time="2024-08-13T23:56:11.464101887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 13 23:56:11 functional-174000 dockerd[5658]: time="2024-08-13T23:56:11.464134303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 13 23:56:11 functional-174000 dockerd[5658]: time="2024-08-13T23:56:11.464140011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 13 23:56:11 functional-174000 dockerd[5658]: time="2024-08-13T23:56:11.464180595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 13 23:56:11 functional-174000 dockerd[5652]: time="2024-08-13T23:56:11.497327882Z" level=info msg="ignoring event" container=e3f81a79ad1315f65bb90c399404ac327379a30e7c8e6189dc1d9cd01a8f67f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 13 23:56:11 functional-174000 dockerd[5658]: time="2024-08-13T23:56:11.497548507Z" level=info msg="shim disconnected" id=e3f81a79ad1315f65bb90c399404ac327379a30e7c8e6189dc1d9cd01a8f67f0 namespace=moby
	Aug 13 23:56:11 functional-174000 dockerd[5658]: time="2024-08-13T23:56:11.497583799Z" level=warning msg="cleaning up after shim disconnected" id=e3f81a79ad1315f65bb90c399404ac327379a30e7c8e6189dc1d9cd01a8f67f0 namespace=moby
	Aug 13 23:56:11 functional-174000 dockerd[5658]: time="2024-08-13T23:56:11.497588507Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 13 23:56:13 functional-174000 dockerd[5658]: time="2024-08-13T23:56:13.113007280Z" level=info msg="shim disconnected" id=5f85fe9b384fcd44e6c0fd00acf54f5fd00f44b58fb8a92b13417a4f51aaecf0 namespace=moby
	Aug 13 23:56:13 functional-174000 dockerd[5658]: time="2024-08-13T23:56:13.113050155Z" level=warning msg="cleaning up after shim disconnected" id=5f85fe9b384fcd44e6c0fd00acf54f5fd00f44b58fb8a92b13417a4f51aaecf0 namespace=moby
	Aug 13 23:56:13 functional-174000 dockerd[5658]: time="2024-08-13T23:56:13.113056405Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 13 23:56:13 functional-174000 dockerd[5652]: time="2024-08-13T23:56:13.113174239Z" level=info msg="ignoring event" container=5f85fe9b384fcd44e6c0fd00acf54f5fd00f44b58fb8a92b13417a4f51aaecf0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 13 23:56:14 functional-174000 dockerd[5658]: time="2024-08-13T23:56:14.774216207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 13 23:56:14 functional-174000 dockerd[5658]: time="2024-08-13T23:56:14.774279832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 13 23:56:14 functional-174000 dockerd[5658]: time="2024-08-13T23:56:14.774297457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 13 23:56:14 functional-174000 dockerd[5658]: time="2024-08-13T23:56:14.774358832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 13 23:56:14 functional-174000 dockerd[5652]: time="2024-08-13T23:56:14.799158415Z" level=info msg="ignoring event" container=77085dc03432d21bb21efff889e92e9fee333b60dfb71d828bf5cb9427066a6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 13 23:56:14 functional-174000 dockerd[5658]: time="2024-08-13T23:56:14.799402873Z" level=info msg="shim disconnected" id=77085dc03432d21bb21efff889e92e9fee333b60dfb71d828bf5cb9427066a6b namespace=moby
	Aug 13 23:56:14 functional-174000 dockerd[5658]: time="2024-08-13T23:56:14.799434831Z" level=warning msg="cleaning up after shim disconnected" id=77085dc03432d21bb21efff889e92e9fee333b60dfb71d828bf5cb9427066a6b namespace=moby
	Aug 13 23:56:14 functional-174000 dockerd[5658]: time="2024-08-13T23:56:14.799439373Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	77085dc03432d       72565bf5bbedf                                                                                         2 seconds ago        Exited              echoserver-arm            2                   f1f7580b3dc89       hello-node-64b4f8f9ff-pgqjf
	e3f81a79ad131       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 seconds ago        Exited              mount-munger              0                   5f85fe9b384fc       busybox-mount
	0d1dc5475e061       72565bf5bbedf                                                                                         12 seconds ago       Exited              echoserver-arm            2                   38233ddc48a15       hello-node-connect-65d86f57f4-gf56q
	43b8e6dda923c       nginx@sha256:98f8ec75657d21b924fe4f69b6b9bff2f6550ea48838af479d8894a852000e40                         22 seconds ago       Running             myfrontend                0                   0fc2ae71889f7       sp-pod
	62c9f46923509       nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         36 seconds ago       Running             nginx                     0                   9398d4d80e65f       nginx-svc
	407136c983652       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       3                   12b12f549f141       storage-provisioner
	ad10c5da5b2ba       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   337fef2074a04       coredns-6f6b679f8f-lzgm4
	9f094c82472df       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       2                   12b12f549f141       storage-provisioner
	73876ac3fcdd7       71d55d66fd4ee                                                                                         About a minute ago   Running             kube-proxy                2                   6abfe93d87c4c       kube-proxy-hsjzf
	7c6a1bd99f214       fcb0683e6bdbd                                                                                         About a minute ago   Running             kube-controller-manager   2                   8d6542f2bd296       kube-controller-manager-functional-174000
	151508bb1cd91       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   059cc7735cd1d       etcd-functional-174000
	c827a440123a1       fbbbd428abb4d                                                                                         About a minute ago   Running             kube-scheduler            2                   4d53a0be27501       kube-scheduler-functional-174000
	8e1c1a709fc1b       cd0f0ae0ec9e0                                                                                         About a minute ago   Running             kube-apiserver            0                   6605f188e0299       kube-apiserver-functional-174000
	9df74a163f540       2437cf7621777                                                                                         2 minutes ago        Exited              coredns                   1                   376fcbed90712       coredns-6f6b679f8f-lzgm4
	81edd347dbfb5       71d55d66fd4ee                                                                                         2 minutes ago        Exited              kube-proxy                1                   aa8b764dc1960       kube-proxy-hsjzf
	fc5da04b3dd75       fbbbd428abb4d                                                                                         2 minutes ago        Exited              kube-scheduler            1                   0fd651500e94b       kube-scheduler-functional-174000
	8b09c53b010d5       fcb0683e6bdbd                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   d37f7690c04ba       kube-controller-manager-functional-174000
	32a116b54ee50       27e3830e14027                                                                                         2 minutes ago        Exited              etcd                      1                   286eb3d37a5c0       etcd-functional-174000
	
	
	==> coredns [9df74a163f54] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39522 - 60733 "HINFO IN 1905204583714236781.7698651145293232096. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028157187s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ad10c5da5b2b] <==
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45562 - 2070 "HINFO IN 7215073274235781202.9027602260817581684. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.024122823s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2082255342]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (13-Aug-2024 23:54:53.292) (total time: 30000ms):
	Trace[2082255342]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (23:55:23.292)
	Trace[2082255342]: [30.000686452s] [30.000686452s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[854366403]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (13-Aug-2024 23:54:53.292) (total time: 30000ms):
	Trace[854366403]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (23:55:23.293)
	Trace[854366403]: [30.00099012s] [30.00099012s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1559525090]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (13-Aug-2024 23:54:53.292) (total time: 30001ms):
	Trace[1559525090]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (23:55:23.293)
	Trace[1559525090]: [30.001212119s] [30.001212119s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] 10.244.0.1:39823 - 46237 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000115584s
	[INFO] 10.244.0.1:25611 - 20127 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000159917s
	[INFO] 10.244.0.1:14387 - 50902 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000036041s
	[INFO] 10.244.0.1:43502 - 48349 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001191917s
	[INFO] 10.244.0.1:28461 - 5097 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000055916s
	[INFO] 10.244.0.1:43343 - 34584 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000021667s
	
	
	==> describe nodes <==
	Name:               functional-174000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-174000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf
	                    minikube.k8s.io/name=functional-174000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_13T16_53_39_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Aug 2024 23:53:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-174000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 13 Aug 2024 23:56:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Aug 2024 23:55:53 +0000   Tue, 13 Aug 2024 23:53:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Aug 2024 23:55:53 +0000   Tue, 13 Aug 2024 23:53:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Aug 2024 23:55:53 +0000   Tue, 13 Aug 2024 23:53:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Aug 2024 23:55:53 +0000   Tue, 13 Aug 2024 23:53:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-174000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 426f4424ea474637b1dbdd5028c9361c
	  System UUID:                426f4424ea474637b1dbdd5028c9361c
	  Boot ID:                    839f53e7-47d6-4a79-b0c1-1ae38702abb2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-pgqjf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	  default                     hello-node-connect-65d86f57f4-gf56q          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 coredns-6f6b679f8f-lzgm4                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m32s
	  kube-system                 etcd-functional-174000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m37s
	  kube-system                 kube-apiserver-functional-174000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-controller-manager-functional-174000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-proxy-hsjzf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-scheduler-functional-174000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m31s                kube-proxy       
	  Normal  Starting                 83s                  kube-proxy       
	  Normal  Starting                 2m3s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  2m37s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m37s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m37s                kubelet          Node functional-174000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m37s                kubelet          Node functional-174000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m37s                kubelet          Node functional-174000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m33s                node-controller  Node functional-174000 event: Registered Node functional-174000 in Controller
	  Normal  NodeReady                2m33s                kubelet          Node functional-174000 status is now: NodeReady
	  Normal  CIDRAssignmentFailed     2m33s                cidrAllocator    Node functional-174000 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  2m7s (x8 over 2m7s)  kubelet          Node functional-174000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m7s (x8 over 2m7s)  kubelet          Node functional-174000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m7s (x7 over 2m7s)  kubelet          Node functional-174000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m2s                 node-controller  Node functional-174000 event: Registered Node functional-174000 in Controller
	  Normal  Starting                 88s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  88s (x8 over 88s)    kubelet          Node functional-174000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s (x8 over 88s)    kubelet          Node functional-174000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s (x7 over 88s)    kubelet          Node functional-174000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  88s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           81s                  node-controller  Node functional-174000 event: Registered Node functional-174000 in Controller
	
	
	==> dmesg <==
	[  +3.406914] kauditd_printk_skb: 199 callbacks suppressed
	[ +11.160742] systemd-fstab-generator[4730]: Ignoring "noauto" option for root device
	[  +0.055234] kauditd_printk_skb: 33 callbacks suppressed
	[ +10.480240] systemd-fstab-generator[5167]: Ignoring "noauto" option for root device
	[  +0.056720] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.103470] systemd-fstab-generator[5200]: Ignoring "noauto" option for root device
	[  +0.089505] systemd-fstab-generator[5212]: Ignoring "noauto" option for root device
	[  +0.101763] systemd-fstab-generator[5226]: Ignoring "noauto" option for root device
	[  +5.136442] kauditd_printk_skb: 91 callbacks suppressed
	[  +7.320030] systemd-fstab-generator[5861]: Ignoring "noauto" option for root device
	[  +0.080932] systemd-fstab-generator[5873]: Ignoring "noauto" option for root device
	[  +0.072550] systemd-fstab-generator[5885]: Ignoring "noauto" option for root device
	[  +0.083225] systemd-fstab-generator[5900]: Ignoring "noauto" option for root device
	[  +0.233216] systemd-fstab-generator[6069]: Ignoring "noauto" option for root device
	[  +0.982518] systemd-fstab-generator[6193]: Ignoring "noauto" option for root device
	[  +4.420083] kauditd_printk_skb: 199 callbacks suppressed
	[Aug13 23:55] kauditd_printk_skb: 34 callbacks suppressed
	[ +19.808710] systemd-fstab-generator[7340]: Ignoring "noauto" option for root device
	[  +5.000082] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.511726] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.003853] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.176074] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.329997] kauditd_printk_skb: 17 callbacks suppressed
	[Aug13 23:56] kauditd_printk_skb: 23 callbacks suppressed
	[  +9.141078] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [151508bb1cd9] <==
	{"level":"info","ts":"2024-08-13T23:54:49.782484Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-08-13T23:54:49.782539Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-13T23:54:49.782568Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-13T23:54:49.783547Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-13T23:54:49.784158Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-13T23:54:49.784240Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-13T23:54:49.784263Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-13T23:54:49.784763Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-13T23:54:49.784791Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-13T23:54:51.587023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-13T23:54:51.587221Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-13T23:54:51.587328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-13T23:54:51.587425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-08-13T23:54:51.587584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-13T23:54:51.587849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-08-13T23:54:51.587898Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-13T23:54:51.592629Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-174000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-13T23:54:51.592954Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-13T23:54:51.593006Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-13T23:54:51.593367Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-13T23:54:51.593081Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-13T23:54:51.595300Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-13T23:54:51.595300Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-13T23:54:51.597342Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-08-13T23:54:51.599623Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [32a116b54ee5] <==
	{"level":"info","ts":"2024-08-13T23:54:11.022634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-13T23:54:11.022678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-08-13T23:54:11.022703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-08-13T23:54:11.022720Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-13T23:54:11.022757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-08-13T23:54:11.022776Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-13T23:54:11.023504Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-13T23:54:11.024154Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-13T23:54:11.024722Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-08-13T23:54:11.024839Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-13T23:54:11.025277Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-13T23:54:11.025819Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-13T23:54:11.023490Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-174000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-13T23:54:11.034028Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-13T23:54:11.034057Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-13T23:54:34.825896Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-13T23:54:34.825932Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-174000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-08-13T23:54:34.825979Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-13T23:54:34.826021Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-13T23:54:34.836153Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-13T23:54:34.836393Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-13T23:54:34.838243Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-08-13T23:54:34.839530Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-13T23:54:34.839562Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-13T23:54:34.839566Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-174000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 23:56:16 up 2 min,  0 users,  load average: 0.32, 0.32, 0.14
	Linux functional-174000 5.10.207 #1 SMP PREEMPT Tue Aug 13 18:43:14 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8e1c1a709fc1] <==
	I0813 23:54:52.191796       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0813 23:54:52.192140       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0813 23:54:52.191805       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0813 23:54:52.210944       1 shared_informer.go:320] Caches are synced for configmaps
	I0813 23:54:52.210964       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0813 23:54:52.213619       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0813 23:54:52.233433       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0813 23:54:52.235645       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0813 23:54:52.235654       1 policy_source.go:224] refreshing policies
	I0813 23:54:52.236791       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0813 23:54:52.243236       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 23:54:53.093516       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0813 23:54:53.196945       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0813 23:54:53.197426       1 controller.go:615] quota admission added evaluator for: endpoints
	I0813 23:54:53.204811       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0813 23:54:53.434927       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0813 23:54:53.439070       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0813 23:54:53.451097       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0813 23:54:53.458575       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 23:54:53.460575       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0813 23:55:31.629075       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.42.119"}
	I0813 23:55:37.137346       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.128.98"}
	I0813 23:55:47.578105       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0813 23:55:47.619138       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.197.6"}
	I0813 23:56:00.918967       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.107.136.114"}
	
	
	==> kube-controller-manager [7c6a1bd99f21] <==
	I0813 23:54:55.692606       1 shared_informer.go:320] Caches are synced for resource quota
	I0813 23:54:55.739390       1 shared_informer.go:320] Caches are synced for endpoint
	I0813 23:54:55.739596       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0813 23:54:56.101312       1 shared_informer.go:320] Caches are synced for garbage collector
	I0813 23:54:56.191008       1 shared_informer.go:320] Caches are synced for garbage collector
	I0813 23:54:56.191054       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0813 23:55:26.052838       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="11.365682ms"
	I0813 23:55:26.053102       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="45.083µs"
	I0813 23:55:47.586732       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="7.333165ms"
	I0813 23:55:47.590884       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="3.615749ms"
	I0813 23:55:47.591267       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="20.708µs"
	I0813 23:55:47.594373       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="10.375µs"
	I0813 23:55:52.645899       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="245.417µs"
	I0813 23:55:53.231620       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-174000"
	I0813 23:55:53.670839       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="30.208µs"
	I0813 23:55:54.688248       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="32.958µs"
	I0813 23:56:00.884985       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="6.681242ms"
	I0813 23:56:00.889786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="4.775037ms"
	I0813 23:56:00.890210       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="18.416µs"
	I0813 23:56:00.893498       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="21.708µs"
	I0813 23:56:01.828445       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="42.292µs"
	I0813 23:56:02.860449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="44µs"
	I0813 23:56:04.889045       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="25.084µs"
	I0813 23:56:14.750657       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="34.375µs"
	I0813 23:56:15.043300       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="28.125µs"
	
	
	==> kube-controller-manager [8b09c53b010d] <==
	I0813 23:54:14.893600       1 shared_informer.go:320] Caches are synced for HPA
	I0813 23:54:14.894017       1 shared_informer.go:320] Caches are synced for TTL
	I0813 23:54:14.894740       1 shared_informer.go:320] Caches are synced for PV protection
	I0813 23:54:14.894779       1 shared_informer.go:320] Caches are synced for service account
	I0813 23:54:14.895858       1 shared_informer.go:320] Caches are synced for node
	I0813 23:54:14.895890       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0813 23:54:14.895901       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0813 23:54:14.895904       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0813 23:54:14.895910       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0813 23:54:14.895934       1 shared_informer.go:320] Caches are synced for crt configmap
	I0813 23:54:14.895935       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-174000"
	I0813 23:54:14.943868       1 shared_informer.go:320] Caches are synced for disruption
	I0813 23:54:14.956181       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0813 23:54:14.956221       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0813 23:54:14.956237       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0813 23:54:14.956241       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0813 23:54:14.971015       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0813 23:54:15.050407       1 shared_informer.go:320] Caches are synced for resource quota
	I0813 23:54:15.066499       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0813 23:54:15.094093       1 shared_informer.go:320] Caches are synced for attach detach
	I0813 23:54:15.096519       1 shared_informer.go:320] Caches are synced for resource quota
	I0813 23:54:15.143357       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0813 23:54:15.507245       1 shared_informer.go:320] Caches are synced for garbage collector
	I0813 23:54:15.542727       1 shared_informer.go:320] Caches are synced for garbage collector
	I0813 23:54:15.542885       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [73876ac3fcdd] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0813 23:54:53.302597       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0813 23:54:53.305875       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0813 23:54:53.305900       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0813 23:54:53.313137       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0813 23:54:53.313151       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0813 23:54:53.313203       1 server_linux.go:169] "Using iptables Proxier"
	I0813 23:54:53.313818       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0813 23:54:53.313930       1 server.go:483] "Version info" version="v1.31.0"
	I0813 23:54:53.313938       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0813 23:54:53.314380       1 config.go:197] "Starting service config controller"
	I0813 23:54:53.314393       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0813 23:54:53.314414       1 config.go:104] "Starting endpoint slice config controller"
	I0813 23:54:53.314420       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0813 23:54:53.314621       1 config.go:326] "Starting node config controller"
	I0813 23:54:53.314649       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0813 23:54:53.415495       1 shared_informer.go:320] Caches are synced for node config
	I0813 23:54:53.415541       1 shared_informer.go:320] Caches are synced for service config
	I0813 23:54:53.415559       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [81edd347dbfb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0813 23:54:12.913196       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0813 23:54:12.963106       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0813 23:54:12.963145       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0813 23:54:12.973441       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0813 23:54:12.973459       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0813 23:54:12.973471       1 server_linux.go:169] "Using iptables Proxier"
	I0813 23:54:12.975351       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0813 23:54:12.975454       1 server.go:483] "Version info" version="v1.31.0"
	I0813 23:54:12.975466       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0813 23:54:12.976066       1 config.go:197] "Starting service config controller"
	I0813 23:54:12.976073       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0813 23:54:12.976081       1 config.go:104] "Starting endpoint slice config controller"
	I0813 23:54:12.976083       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0813 23:54:12.976206       1 config.go:326] "Starting node config controller"
	I0813 23:54:12.976208       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0813 23:54:13.076640       1 shared_informer.go:320] Caches are synced for service config
	I0813 23:54:13.076638       1 shared_informer.go:320] Caches are synced for node config
	I0813 23:54:13.076653       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c827a440123a] <==
	I0813 23:54:50.083281       1 serving.go:386] Generated self-signed cert in-memory
	W0813 23:54:52.152643       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0813 23:54:52.152702       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0813 23:54:52.152717       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0813 23:54:52.152727       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0813 23:54:52.157733       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0813 23:54:52.157744       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0813 23:54:52.158954       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0813 23:54:52.159310       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0813 23:54:52.159339       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 23:54:52.159376       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0813 23:54:52.260448       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [fc5da04b3dd7] <==
	I0813 23:54:10.619427       1 serving.go:386] Generated self-signed cert in-memory
	W0813 23:54:11.536257       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0813 23:54:11.536276       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0813 23:54:11.536280       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0813 23:54:11.536283       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0813 23:54:11.560755       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0813 23:54:11.560853       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0813 23:54:11.562334       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0813 23:54:11.562372       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 23:54:11.562448       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0813 23:54:11.562480       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0813 23:54:11.663297       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 23:54:34.819406       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0813 23:54:34.819431       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0813 23:54:34.819505       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 13 23:56:01 functional-174000 kubelet[6200]: I0813 23:56:01.816321    6200 scope.go:117] "RemoveContainer" containerID="f2874a40a660a767ca2607d68d68871590822d1da035330af37cce19218b8607"
	Aug 13 23:56:02 functional-174000 kubelet[6200]: I0813 23:56:02.847398    6200 scope.go:117] "RemoveContainer" containerID="f2874a40a660a767ca2607d68d68871590822d1da035330af37cce19218b8607"
	Aug 13 23:56:02 functional-174000 kubelet[6200]: I0813 23:56:02.847729    6200 scope.go:117] "RemoveContainer" containerID="e3825c2e15978eabb7b15016601657101cd80f0952c717c4ce97963904821625"
	Aug 13 23:56:02 functional-174000 kubelet[6200]: E0813 23:56:02.847900    6200 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 10s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-pgqjf_default(11c54ba2-25e6-47f1-8474-1d08a2be7ca1)\"" pod="default/hello-node-64b4f8f9ff-pgqjf" podUID="11c54ba2-25e6-47f1-8474-1d08a2be7ca1"
	Aug 13 23:56:04 functional-174000 kubelet[6200]: I0813 23:56:04.733940    6200 scope.go:117] "RemoveContainer" containerID="cbdf9eaa7b92385a80fb22cd0fda6d0e9e6f231c6e7eae2f1e3039fcb22efe3e"
	Aug 13 23:56:04 functional-174000 kubelet[6200]: I0813 23:56:04.884271    6200 scope.go:117] "RemoveContainer" containerID="cbdf9eaa7b92385a80fb22cd0fda6d0e9e6f231c6e7eae2f1e3039fcb22efe3e"
	Aug 13 23:56:04 functional-174000 kubelet[6200]: I0813 23:56:04.884387    6200 scope.go:117] "RemoveContainer" containerID="0d1dc5475e061dd4ab4bff730a1d7df58a3a95301acca6cc81c5e7c45814e6c3"
	Aug 13 23:56:04 functional-174000 kubelet[6200]: E0813 23:56:04.884453    6200 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-gf56q_default(c62e2c3c-05a8-45ae-bd4c-48c40fb59433)\"" pod="default/hello-node-connect-65d86f57f4-gf56q" podUID="c62e2c3c-05a8-45ae-bd4c-48c40fb59433"
	Aug 13 23:56:09 functional-174000 kubelet[6200]: I0813 23:56:09.762778    6200 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q6gv\" (UniqueName: \"kubernetes.io/projected/eb6c4afb-67fb-4432-997f-9b157e1e23eb-kube-api-access-2q6gv\") pod \"busybox-mount\" (UID: \"eb6c4afb-67fb-4432-997f-9b157e1e23eb\") " pod="default/busybox-mount"
	Aug 13 23:56:09 functional-174000 kubelet[6200]: I0813 23:56:09.762802    6200 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/eb6c4afb-67fb-4432-997f-9b157e1e23eb-test-volume\") pod \"busybox-mount\" (UID: \"eb6c4afb-67fb-4432-997f-9b157e1e23eb\") " pod="default/busybox-mount"
	Aug 13 23:56:13 functional-174000 kubelet[6200]: I0813 23:56:13.193946    6200 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q6gv\" (UniqueName: \"kubernetes.io/projected/eb6c4afb-67fb-4432-997f-9b157e1e23eb-kube-api-access-2q6gv\") pod \"eb6c4afb-67fb-4432-997f-9b157e1e23eb\" (UID: \"eb6c4afb-67fb-4432-997f-9b157e1e23eb\") "
	Aug 13 23:56:13 functional-174000 kubelet[6200]: I0813 23:56:13.193966    6200 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/eb6c4afb-67fb-4432-997f-9b157e1e23eb-test-volume\") pod \"eb6c4afb-67fb-4432-997f-9b157e1e23eb\" (UID: \"eb6c4afb-67fb-4432-997f-9b157e1e23eb\") "
	Aug 13 23:56:13 functional-174000 kubelet[6200]: I0813 23:56:13.193990    6200 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb6c4afb-67fb-4432-997f-9b157e1e23eb-test-volume" (OuterVolumeSpecName: "test-volume") pod "eb6c4afb-67fb-4432-997f-9b157e1e23eb" (UID: "eb6c4afb-67fb-4432-997f-9b157e1e23eb"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 13 23:56:13 functional-174000 kubelet[6200]: I0813 23:56:13.196678    6200 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb6c4afb-67fb-4432-997f-9b157e1e23eb-kube-api-access-2q6gv" (OuterVolumeSpecName: "kube-api-access-2q6gv") pod "eb6c4afb-67fb-4432-997f-9b157e1e23eb" (UID: "eb6c4afb-67fb-4432-997f-9b157e1e23eb"). InnerVolumeSpecName "kube-api-access-2q6gv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 13 23:56:13 functional-174000 kubelet[6200]: I0813 23:56:13.294790    6200 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2q6gv\" (UniqueName: \"kubernetes.io/projected/eb6c4afb-67fb-4432-997f-9b157e1e23eb-kube-api-access-2q6gv\") on node \"functional-174000\" DevicePath \"\""
	Aug 13 23:56:13 functional-174000 kubelet[6200]: I0813 23:56:13.294809    6200 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/eb6c4afb-67fb-4432-997f-9b157e1e23eb-test-volume\") on node \"functional-174000\" DevicePath \"\""
	Aug 13 23:56:14 functional-174000 kubelet[6200]: I0813 23:56:14.027123    6200 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f85fe9b384fcd44e6c0fd00acf54f5fd00f44b58fb8a92b13417a4f51aaecf0"
	Aug 13 23:56:14 functional-174000 kubelet[6200]: I0813 23:56:14.731886    6200 scope.go:117] "RemoveContainer" containerID="e3825c2e15978eabb7b15016601657101cd80f0952c717c4ce97963904821625"
	Aug 13 23:56:15 functional-174000 kubelet[6200]: I0813 23:56:15.035195    6200 scope.go:117] "RemoveContainer" containerID="e3825c2e15978eabb7b15016601657101cd80f0952c717c4ce97963904821625"
	Aug 13 23:56:15 functional-174000 kubelet[6200]: I0813 23:56:15.035376    6200 scope.go:117] "RemoveContainer" containerID="77085dc03432d21bb21efff889e92e9fee333b60dfb71d828bf5cb9427066a6b"
	Aug 13 23:56:15 functional-174000 kubelet[6200]: E0813 23:56:15.035443    6200 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-pgqjf_default(11c54ba2-25e6-47f1-8474-1d08a2be7ca1)\"" pod="default/hello-node-64b4f8f9ff-pgqjf" podUID="11c54ba2-25e6-47f1-8474-1d08a2be7ca1"
	Aug 13 23:56:16 functional-174000 kubelet[6200]: E0813 23:56:16.998348    6200 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eb6c4afb-67fb-4432-997f-9b157e1e23eb" containerName="mount-munger"
	Aug 13 23:56:16 functional-174000 kubelet[6200]: I0813 23:56:16.998380    6200 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb6c4afb-67fb-4432-997f-9b157e1e23eb" containerName="mount-munger"
	Aug 13 23:56:17 functional-174000 kubelet[6200]: I0813 23:56:17.019744    6200 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhdq7\" (UniqueName: \"kubernetes.io/projected/2afe4726-4012-498c-ad86-ae9fa3f5cbd1-kube-api-access-lhdq7\") pod \"kubernetes-dashboard-695b96c756-mr69v\" (UID: \"2afe4726-4012-498c-ad86-ae9fa3f5cbd1\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-mr69v"
	Aug 13 23:56:17 functional-174000 kubelet[6200]: I0813 23:56:17.019771    6200 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2afe4726-4012-498c-ad86-ae9fa3f5cbd1-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-mr69v\" (UID: \"2afe4726-4012-498c-ad86-ae9fa3f5cbd1\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-mr69v"
	
	
	==> storage-provisioner [407136c98365] <==
	I0813 23:55:06.847570       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 23:55:06.851501       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 23:55:06.851517       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 23:55:24.257767       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 23:55:24.258096       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-174000_8995c792-d4c0-4143-9229-fda686daa1c0!
	I0813 23:55:24.258386       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"785487df-a211-4ac1-beae-bc6ea9e988fb", APIVersion:"v1", ResourceVersion:"597", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-174000_8995c792-d4c0-4143-9229-fda686daa1c0 became leader
	I0813 23:55:24.360973       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-174000_8995c792-d4c0-4143-9229-fda686daa1c0!
	I0813 23:55:42.024922       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0813 23:55:42.025024       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    35dcd565-0ffa-4b69-a787-a8556cb0fcc1 302 0 2024-08-13 23:53:44 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-08-13 23:53:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-17abe7a4-f564-478a-bc8e-feb883b0150f &PersistentVolumeClaim{ObjectMeta:{myclaim  default  17abe7a4-f564-478a-bc8e-feb883b0150f 665 0 2024-08-13 23:55:42 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-08-13 23:55:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-08-13 23:55:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0813 23:55:42.025587       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-17abe7a4-f564-478a-bc8e-feb883b0150f" provisioned
	I0813 23:55:42.025670       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0813 23:55:42.025711       1 volume_store.go:212] Trying to save persistentvolume "pvc-17abe7a4-f564-478a-bc8e-feb883b0150f"
	I0813 23:55:42.026311       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"17abe7a4-f564-478a-bc8e-feb883b0150f", APIVersion:"v1", ResourceVersion:"665", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0813 23:55:42.030250       1 volume_store.go:219] persistentvolume "pvc-17abe7a4-f564-478a-bc8e-feb883b0150f" saved
	I0813 23:55:42.030481       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"17abe7a4-f564-478a-bc8e-feb883b0150f", APIVersion:"v1", ResourceVersion:"665", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-17abe7a4-f564-478a-bc8e-feb883b0150f
	
	
	==> storage-provisioner [9f094c82472d] <==
	I0813 23:54:53.260592       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0813 23:54:53.264001       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
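Both kube-proxy containers in the log dump above report "Error cleaning up nftables rules ... Operation not supported" and then continue with "Using iptables Proxier", so the nftables errors look like a failed startup cleanup step rather than the proxy failing outright. One way to confirm that the guest kernel simply lacks nftables support is a quick probe from the host; this is a sketch, and it assumes the nft binary is present in the minikube guest image:

	$ out/minikube-darwin-arm64 -p functional-174000 ssh -- sudo nft list tables

If that command fails with the same "Operation not supported", kube-proxy's fallback to the iptables proxier is the expected behavior on this guest.
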
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-174000 -n functional-174000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-174000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-c5db448b4-k4859 kubernetes-dashboard-695b96c756-mr69v
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-174000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-k4859 kubernetes-dashboard-695b96c756-mr69v
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-174000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-k4859 kubernetes-dashboard-695b96c756-mr69v: exit status 1 (40.654583ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-174000/192.168.105.4
	Start Time:       Tue, 13 Aug 2024 16:56:09 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://e3f81a79ad1315f65bb90c399404ac327379a30e7c8e6189dc1d9cd01a8f67f0
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 13 Aug 2024 16:56:11 -0700
	      Finished:     Tue, 13 Aug 2024 16:56:11 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2q6gv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-2q6gv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  7s    default-scheduler  Successfully assigned default/busybox-mount to functional-174000
	  Normal  Pulling    7s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.311s (1.311s including waiting). Image size: 3547125 bytes.
	  Normal  Created    6s    kubelet            Created container mount-munger
	  Normal  Started    6s    kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-c5db448b4-k4859" not found
	Error from server (NotFound): pods "kubernetes-dashboard-695b96c756-mr69v" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-174000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-k4859 kubernetes-dashboard-695b96c756-mr69v: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (29.73s)
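
To reproduce just this failure outside CI, one option is to invoke the single subtest through the ordinary Go test runner from the minikube source tree; this is a sketch that assumes the layout used by this job (prebuilt binary at out/minikube-darwin-arm64, which the integration harness should pick up, though the exact harness flags may differ locally):

	$ go test ./test/integration -v -run 'TestFunctional/parallel/ServiceCmdConnect' -timeout 30m
	$ out/minikube-darwin-arm64 delete -p functional-174000

The delete call cleans up the leftover functional-174000 profile between runs.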

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (214.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 node stop m02 -v=7 --alsologtostderr
E0813 17:00:41.950162    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/functional-174000/client.crt: no such file or directory" logger="UnhandledError"
E0813 17:00:47.072504    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/functional-174000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-699000 node stop m02 -v=7 --alsologtostderr: (12.193583625s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 status -v=7 --alsologtostderr
E0813 17:00:57.315997    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/functional-174000/client.crt: no such file or directory" logger="UnhandledError"
E0813 17:01:17.799246    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/functional-174000/client.crt: no such file or directory" logger="UnhandledError"
E0813 17:01:58.777761    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/functional-174000/client.crt: no such file or directory" logger="UnhandledError"
E0813 17:03:20.705197    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/functional-174000/client.crt: no such file or directory" logger="UnhandledError"
E0813 17:03:47.010449    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-699000 status -v=7 --alsologtostderr: exit status 7 (2m55.979705875s)

                                                
                                                
-- stdout --
	ha-699000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-699000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-699000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-699000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 17:00:52.233181    3004 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:00:52.233401    3004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:00:52.233405    3004 out.go:304] Setting ErrFile to fd 2...
	I0813 17:00:52.233408    3004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:00:52.233552    3004 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:00:52.233719    3004 out.go:298] Setting JSON to false
	I0813 17:00:52.233733    3004 mustload.go:65] Loading cluster: ha-699000
	I0813 17:00:52.233773    3004 notify.go:220] Checking for updates...
	I0813 17:00:52.234021    3004 config.go:182] Loaded profile config "ha-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:00:52.234027    3004 status.go:255] checking status of ha-699000 ...
	I0813 17:00:52.234837    3004 status.go:330] ha-699000 host status = "Running" (err=<nil>)
	I0813 17:00:52.234846    3004 host.go:66] Checking if "ha-699000" exists ...
	I0813 17:00:52.234974    3004 host.go:66] Checking if "ha-699000" exists ...
	I0813 17:00:52.235102    3004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 17:00:52.235110    3004 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000/id_rsa Username:docker}
	W0813 17:01:18.154152    3004 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0813 17:01:18.154322    3004 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0813 17:01:18.154344    3004 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0813 17:01:18.154355    3004 status.go:257] ha-699000 status: &{Name:ha-699000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0813 17:01:18.154375    3004 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0813 17:01:18.154385    3004 status.go:255] checking status of ha-699000-m02 ...
	I0813 17:01:18.154845    3004 status.go:330] ha-699000-m02 host status = "Stopped" (err=<nil>)
	I0813 17:01:18.154857    3004 status.go:343] host is not running, skipping remaining checks
	I0813 17:01:18.154862    3004 status.go:257] ha-699000-m02 status: &{Name:ha-699000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0813 17:01:18.154889    3004 status.go:255] checking status of ha-699000-m03 ...
	I0813 17:01:18.156117    3004 status.go:330] ha-699000-m03 host status = "Running" (err=<nil>)
	I0813 17:01:18.156137    3004 host.go:66] Checking if "ha-699000-m03" exists ...
	I0813 17:01:18.156370    3004 host.go:66] Checking if "ha-699000-m03" exists ...
	I0813 17:01:18.156612    3004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 17:01:18.156627    3004 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000-m03/id_rsa Username:docker}
	W0813 17:02:33.178133    3004 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0813 17:02:33.178200    3004 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0813 17:02:33.178209    3004 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0813 17:02:33.178212    3004 status.go:257] ha-699000-m03 status: &{Name:ha-699000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0813 17:02:33.178222    3004 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0813 17:02:33.178226    3004 status.go:255] checking status of ha-699000-m04 ...
	I0813 17:02:33.178949    3004 status.go:330] ha-699000-m04 host status = "Running" (err=<nil>)
	I0813 17:02:33.178959    3004 host.go:66] Checking if "ha-699000-m04" exists ...
	I0813 17:02:33.179066    3004 host.go:66] Checking if "ha-699000-m04" exists ...
	I0813 17:02:33.179190    3004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 17:02:33.179196    3004 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000-m04/id_rsa Username:docker}
	W0813 17:03:48.179504    3004 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0813 17:03:48.179552    3004 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0813 17:03:48.179561    3004 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0813 17:03:48.179564    3004 status.go:257] ha-699000-m04 status: &{Name:ha-699000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0813 17:03:48.179589    3004 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

                                                
                                                
** /stderr **
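The exit status 7 above matches minikube's documented status encoding, in which component health is packed into the exit code bit by bit (per the status help text: 1 for the host/VM, 2 for the cluster, 4 for Kubernetes, so 7 means all three are unhealthy). A minimal shell sketch of decoding it, assuming that bit layout:

	$ out/minikube-darwin-arm64 -p ha-699000 status -v=7 --alsologtostderr; rc=$?
	$ (( rc & 1 )) && echo "host not ok"
	$ (( rc & 2 )) && echo "cluster not ok"
	$ (( rc & 4 )) && echo "kubernetes not ok"
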
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-699000 status -v=7 --alsologtostderr": ha-699000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-699000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-699000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-699000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-699000 status -v=7 --alsologtostderr": ha-699000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-699000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-699000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-699000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-699000 status -v=7 --alsologtostderr": ha-699000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-699000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-699000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-699000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-699000 -n ha-699000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-699000 -n ha-699000: exit status 3 (25.962137958s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 17:04:14.141616    3045 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0813 17:04:14.141624    3045 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-699000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.14s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (104.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m18.244985875s)
ha_test.go:413: expected profile "ha-699000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-699000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-699000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-699000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-699000 -n ha-699000
E0813 17:05:36.825716    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/functional-174000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-699000 -n ha-699000: exit status 3 (25.960368209s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 17:05:58.343022    3057 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0813 17:05:58.343064    3057 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-699000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (104.21s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (208.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-699000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.10519575s)

                                                
                                                
-- stdout --
	* Starting "ha-699000-m02" control-plane node in "ha-699000" cluster
	* Restarting existing qemu2 VM for "ha-699000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-699000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 17:05:58.400801    3071 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:05:58.401275    3071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:05:58.401281    3071 out.go:304] Setting ErrFile to fd 2...
	I0813 17:05:58.401284    3071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:05:58.401490    3071 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:05:58.401830    3071 mustload.go:65] Loading cluster: ha-699000
	I0813 17:05:58.402489    3071 config.go:182] Loaded profile config "ha-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0813 17:05:58.402795    3071 host.go:58] "ha-699000-m02" host status: Stopped
	I0813 17:05:58.407862    3071 out.go:177] * Starting "ha-699000-m02" control-plane node in "ha-699000" cluster
	I0813 17:05:58.411766    3071 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:05:58.411786    3071 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 17:05:58.411795    3071 cache.go:56] Caching tarball of preloaded images
	I0813 17:05:58.411914    3071 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:05:58.411920    3071 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0813 17:05:58.411996    3071 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/ha-699000/config.json ...
	I0813 17:05:58.412688    3071 start.go:360] acquireMachinesLock for ha-699000-m02: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:05:58.412743    3071 start.go:364] duration metric: took 37.334µs to acquireMachinesLock for "ha-699000-m02"
	I0813 17:05:58.412762    3071 start.go:96] Skipping create...Using existing machine configuration
	I0813 17:05:58.412767    3071 fix.go:54] fixHost starting: m02
	I0813 17:05:58.412921    3071 fix.go:112] recreateIfNeeded on ha-699000-m02: state=Stopped err=<nil>
	W0813 17:05:58.412927    3071 fix.go:138] unexpected machine state, will restart: <nil>
	I0813 17:05:58.415908    3071 out.go:177] * Restarting existing qemu2 VM for "ha-699000-m02" ...
	I0813 17:05:58.418835    3071 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:05:58.418904    3071 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:99:b0:3b:a2:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000-m02/disk.qcow2
	I0813 17:05:58.421771    3071 main.go:141] libmachine: STDOUT: 
	I0813 17:05:58.421793    3071 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:05:58.421821    3071 fix.go:56] duration metric: took 9.053291ms for fixHost
	I0813 17:05:58.421825    3071 start.go:83] releasing machines lock for "ha-699000-m02", held for 9.069958ms
	W0813 17:05:58.421833    3071 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:05:58.421869    3071 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:05:58.421874    3071 start.go:729] Will try again in 5 seconds ...
	I0813 17:06:03.423936    3071 start.go:360] acquireMachinesLock for ha-699000-m02: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:06:03.424066    3071 start.go:364] duration metric: took 102.125µs to acquireMachinesLock for "ha-699000-m02"
	I0813 17:06:03.424105    3071 start.go:96] Skipping create...Using existing machine configuration
	I0813 17:06:03.424109    3071 fix.go:54] fixHost starting: m02
	I0813 17:06:03.424287    3071 fix.go:112] recreateIfNeeded on ha-699000-m02: state=Stopped err=<nil>
	W0813 17:06:03.424292    3071 fix.go:138] unexpected machine state, will restart: <nil>
	I0813 17:06:03.428170    3071 out.go:177] * Restarting existing qemu2 VM for "ha-699000-m02" ...
	I0813 17:06:03.432158    3071 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:06:03.432216    3071 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:99:b0:3b:a2:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000-m02/disk.qcow2
	I0813 17:06:03.434327    3071 main.go:141] libmachine: STDOUT: 
	I0813 17:06:03.434343    3071 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:06:03.434361    3071 fix.go:56] duration metric: took 10.252625ms for fixHost
	I0813 17:06:03.434366    3071 start.go:83] releasing machines lock for "ha-699000-m02", held for 10.295459ms
	W0813 17:06:03.434420    3071 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-699000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-699000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:06:03.438057    3071 out.go:177] 
	W0813 17:06:03.442171    3071 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:06:03.442178    3071 out.go:239] * 
	* 
	W0813 17:06:03.443880    3071 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:06:03.448110    3071 out.go:177] 

                                                
                                                
** /stderr **
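Both restart attempts in this failure die on the same host-side error: Failed to connect to "/var/run/socket_vmnet": Connection refused. That points at the socket_vmnet helper on the macOS host rather than at minikube itself, and the same error recurs across the qemu2 failures in this report. A quick host-side check, assuming socket_vmnet was installed via Homebrew as in the qemu2 driver setup instructions (the socket path comes from the log above; the service invocation is an assumption about the local install):

	$ ls -l /var/run/socket_vmnet
	$ HOMEBREW=$(which brew) && sudo ${HOMEBREW} services restart socket_vmnet

If the socket file is missing or the service is not running, every VM start on the socket_vmnet network will fail this way regardless of profile.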
ha_test.go:422: I0813 17:05:58.400801    3071 out.go:291] Setting OutFile to fd 1 ...
I0813 17:05:58.401275    3071 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0813 17:05:58.401281    3071 out.go:304] Setting ErrFile to fd 2...
I0813 17:05:58.401284    3071 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0813 17:05:58.401490    3071 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
I0813 17:05:58.401830    3071 mustload.go:65] Loading cluster: ha-699000
I0813 17:05:58.402489    3071 config.go:182] Loaded profile config "ha-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
W0813 17:05:58.402795    3071 host.go:58] "ha-699000-m02" host status: Stopped
I0813 17:05:58.407862    3071 out.go:177] * Starting "ha-699000-m02" control-plane node in "ha-699000" cluster
I0813 17:05:58.411766    3071 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0813 17:05:58.411786    3071 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
I0813 17:05:58.411795    3071 cache.go:56] Caching tarball of preloaded images
I0813 17:05:58.411914    3071 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0813 17:05:58.411920    3071 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
I0813 17:05:58.411996    3071 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/ha-699000/config.json ...
I0813 17:05:58.412688    3071 start.go:360] acquireMachinesLock for ha-699000-m02: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0813 17:05:58.412743    3071 start.go:364] duration metric: took 37.334µs to acquireMachinesLock for "ha-699000-m02"
I0813 17:05:58.412762    3071 start.go:96] Skipping create...Using existing machine configuration
I0813 17:05:58.412767    3071 fix.go:54] fixHost starting: m02
I0813 17:05:58.412921    3071 fix.go:112] recreateIfNeeded on ha-699000-m02: state=Stopped err=<nil>
W0813 17:05:58.412927    3071 fix.go:138] unexpected machine state, will restart: <nil>
I0813 17:05:58.415908    3071 out.go:177] * Restarting existing qemu2 VM for "ha-699000-m02" ...
I0813 17:05:58.418835    3071 qemu.go:418] Using hvf for hardware acceleration
I0813 17:05:58.418904    3071 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:99:b0:3b:a2:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000-m02/disk.qcow2
I0813 17:05:58.421771    3071 main.go:141] libmachine: STDOUT: 
I0813 17:05:58.421793    3071 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0813 17:05:58.421821    3071 fix.go:56] duration metric: took 9.053291ms for fixHost
I0813 17:05:58.421825    3071 start.go:83] releasing machines lock for "ha-699000-m02", held for 9.069958ms
W0813 17:05:58.421833    3071 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0813 17:05:58.421869    3071 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0813 17:05:58.421874    3071 start.go:729] Will try again in 5 seconds ...
I0813 17:06:03.423936    3071 start.go:360] acquireMachinesLock for ha-699000-m02: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0813 17:06:03.424066    3071 start.go:364] duration metric: took 102.125µs to acquireMachinesLock for "ha-699000-m02"
I0813 17:06:03.424105    3071 start.go:96] Skipping create...Using existing machine configuration
I0813 17:06:03.424109    3071 fix.go:54] fixHost starting: m02
I0813 17:06:03.424287    3071 fix.go:112] recreateIfNeeded on ha-699000-m02: state=Stopped err=<nil>
W0813 17:06:03.424292    3071 fix.go:138] unexpected machine state, will restart: <nil>
I0813 17:06:03.428170    3071 out.go:177] * Restarting existing qemu2 VM for "ha-699000-m02" ...
I0813 17:06:03.432158    3071 qemu.go:418] Using hvf for hardware acceleration
I0813 17:06:03.432216    3071 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:99:b0:3b:a2:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000-m02/disk.qcow2
I0813 17:06:03.434327    3071 main.go:141] libmachine: STDOUT: 
I0813 17:06:03.434343    3071 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0813 17:06:03.434361    3071 fix.go:56] duration metric: took 10.252625ms for fixHost
I0813 17:06:03.434366    3071 start.go:83] releasing machines lock for "ha-699000-m02", held for 10.295459ms
W0813 17:06:03.434420    3071 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-699000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-699000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0813 17:06:03.438057    3071 out.go:177] 
W0813 17:06:03.442171    3071 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0813 17:06:03.442178    3071 out.go:239] * 
* 
W0813 17:06:03.443880    3071 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0813 17:06:03.448110    3071 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-699000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 status -v=7 --alsologtostderr
E0813 17:06:04.546909    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/functional-174000/client.crt: no such file or directory" logger="UnhandledError"
E0813 17:08:47.007537    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-699000 status -v=7 --alsologtostderr: exit status 7 (2m57.650106042s)

                                                
                                                
-- stdout --
	ha-699000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-699000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-699000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-699000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 17:06:03.484763    3075 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:06:03.484930    3075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:06:03.484933    3075 out.go:304] Setting ErrFile to fd 2...
	I0813 17:06:03.484935    3075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:06:03.485071    3075 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:06:03.485198    3075 out.go:298] Setting JSON to false
	I0813 17:06:03.485213    3075 mustload.go:65] Loading cluster: ha-699000
	I0813 17:06:03.485268    3075 notify.go:220] Checking for updates...
	I0813 17:06:03.485451    3075 config.go:182] Loaded profile config "ha-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:06:03.485457    3075 status.go:255] checking status of ha-699000 ...
	I0813 17:06:03.486129    3075 status.go:330] ha-699000 host status = "Running" (err=<nil>)
	I0813 17:06:03.486138    3075 host.go:66] Checking if "ha-699000" exists ...
	I0813 17:06:03.486225    3075 host.go:66] Checking if "ha-699000" exists ...
	I0813 17:06:03.486337    3075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 17:06:03.486345    3075 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000/id_rsa Username:docker}
	W0813 17:06:03.486527    3075 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0813 17:06:03.486541    3075 retry.go:31] will retry after 200.564652ms: dial tcp 192.168.105.5:22: connect: host is down
	W0813 17:06:03.689284    3075 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0813 17:06:03.689310    3075 retry.go:31] will retry after 393.568413ms: dial tcp 192.168.105.5:22: connect: host is down
	W0813 17:06:04.085043    3075 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0813 17:06:04.085063    3075 retry.go:31] will retry after 640.673863ms: dial tcp 192.168.105.5:22: connect: host is down
	W0813 17:06:04.727341    3075 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0813 17:06:04.727368    3075 retry.go:31] will retry after 438.82355ms: dial tcp 192.168.105.5:22: connect: host is down
	W0813 17:06:31.091151    3075 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0813 17:06:31.091241    3075 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0813 17:06:31.091259    3075 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0813 17:06:31.091264    3075 status.go:257] ha-699000 status: &{Name:ha-699000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0813 17:06:31.091275    3075 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0813 17:06:31.091279    3075 status.go:255] checking status of ha-699000-m02 ...
	I0813 17:06:31.091526    3075 status.go:330] ha-699000-m02 host status = "Stopped" (err=<nil>)
	I0813 17:06:31.091535    3075 status.go:343] host is not running, skipping remaining checks
	I0813 17:06:31.091545    3075 status.go:257] ha-699000-m02 status: &{Name:ha-699000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0813 17:06:31.091552    3075 status.go:255] checking status of ha-699000-m03 ...
	I0813 17:06:31.092254    3075 status.go:330] ha-699000-m03 host status = "Running" (err=<nil>)
	I0813 17:06:31.092263    3075 host.go:66] Checking if "ha-699000-m03" exists ...
	I0813 17:06:31.092376    3075 host.go:66] Checking if "ha-699000-m03" exists ...
	I0813 17:06:31.092514    3075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 17:06:31.092526    3075 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000-m03/id_rsa Username:docker}
	W0813 17:07:46.093152    3075 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0813 17:07:46.093195    3075 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0813 17:07:46.093205    3075 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0813 17:07:46.093209    3075 status.go:257] ha-699000-m03 status: &{Name:ha-699000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0813 17:07:46.093217    3075 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0813 17:07:46.093224    3075 status.go:255] checking status of ha-699000-m04 ...
	I0813 17:07:46.093905    3075 status.go:330] ha-699000-m04 host status = "Running" (err=<nil>)
	I0813 17:07:46.093913    3075 host.go:66] Checking if "ha-699000-m04" exists ...
	I0813 17:07:46.094020    3075 host.go:66] Checking if "ha-699000-m04" exists ...
	I0813 17:07:46.094136    3075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 17:07:46.094142    3075 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000-m04/id_rsa Username:docker}
	W0813 17:09:01.095226    3075 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0813 17:09:01.095430    3075 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0813 17:09:01.095470    3075 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0813 17:09:01.095490    3075 status.go:257] ha-699000-m04 status: &{Name:ha-699000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0813 17:09:01.095534    3075 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-699000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-699000 -n ha-699000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-699000 -n ha-699000: exit status 3 (26.002757542s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 17:09:27.099326    3110 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0813 17:09:27.099365    3110 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-699000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (208.76s)
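The stderr above shows the dial pattern minikube's ssh client uses when a node is unreachable: dial, fail, wait a little longer, dial again (200ms, 393ms, 640ms, ...), then give up with "operation timed out". The following is a minimal Go sketch of that dial-with-backoff shape; the function name, jitter factor, and deadline are illustrative assumptions, not minikube's actual retry.go code.

	package main

	import (
		"fmt"
		"math/rand"
		"net"
		"time"
	)

	// dialWithBackoff retries a TCP dial with a growing, slightly jittered
	// wait between attempts, then gives up once the overall deadline passes.
	func dialWithBackoff(addr string, deadline time.Duration) (net.Conn, error) {
		start := time.Now()
		wait := 200 * time.Millisecond // first interval seen in the log above
		for {
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err == nil {
				return conn, nil
			}
			if time.Since(start) > deadline {
				return nil, fmt.Errorf("giving up on %s: %w", addr, err)
			}
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			// grow the interval with jitter, roughly matching 200ms -> 393ms -> 640ms
			wait = time.Duration(float64(wait) * (1.5 + rand.Float64()/2))
		}
	}

	func main() {
		if _, err := dialWithBackoff("192.168.105.5:22", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}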

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (283.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-699000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-699000 -v=7 --alsologtostderr
E0813 17:13:47.003752    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-699000 -v=7 --alsologtostderr: (4m38.102064875s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-699000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-699000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.226904042s)

                                                
                                                
-- stdout --
	* [ha-699000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-699000" primary control-plane node in "ha-699000" cluster
	* Restarting existing qemu2 VM for "ha-699000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-699000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 17:15:23.344102    3218 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:15:23.344279    3218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:15:23.344285    3218 out.go:304] Setting ErrFile to fd 2...
	I0813 17:15:23.344288    3218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:15:23.344489    3218 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:15:23.345939    3218 out.go:298] Setting JSON to false
	I0813 17:15:23.367285    3218 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2687,"bootTime":1723591836,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:15:23.367361    3218 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:15:23.372898    3218 out.go:177] * [ha-699000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:15:23.379852    3218 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:15:23.379894    3218 notify.go:220] Checking for updates...
	I0813 17:15:23.386807    3218 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:15:23.389860    3218 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:15:23.392874    3218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:15:23.394132    3218 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:15:23.396820    3218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:15:23.400241    3218 config.go:182] Loaded profile config "ha-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:15:23.400293    3218 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:15:23.404672    3218 out.go:177] * Using the qemu2 driver based on existing profile
	I0813 17:15:23.411815    3218 start.go:297] selected driver: qemu2
	I0813 17:15:23.411821    3218 start.go:901] validating driver "qemu2" against &{Name:ha-699000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-699000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:15:23.411911    3218 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:15:23.414642    3218 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 17:15:23.414699    3218 cni.go:84] Creating CNI manager for ""
	I0813 17:15:23.414704    3218 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0813 17:15:23.414754    3218 start.go:340] cluster config:
	{Name:ha-699000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-699000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:15:23.418719    3218 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:15:23.427828    3218 out.go:177] * Starting "ha-699000" primary control-plane node in "ha-699000" cluster
	I0813 17:15:23.431799    3218 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:15:23.431813    3218 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 17:15:23.431821    3218 cache.go:56] Caching tarball of preloaded images
	I0813 17:15:23.431876    3218 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:15:23.431887    3218 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0813 17:15:23.431957    3218 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/ha-699000/config.json ...
	I0813 17:15:23.432374    3218 start.go:360] acquireMachinesLock for ha-699000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:15:23.432410    3218 start.go:364] duration metric: took 30.291µs to acquireMachinesLock for "ha-699000"
	I0813 17:15:23.432420    3218 start.go:96] Skipping create...Using existing machine configuration
	I0813 17:15:23.432425    3218 fix.go:54] fixHost starting: 
	I0813 17:15:23.432546    3218 fix.go:112] recreateIfNeeded on ha-699000: state=Stopped err=<nil>
	W0813 17:15:23.432554    3218 fix.go:138] unexpected machine state, will restart: <nil>
	I0813 17:15:23.436857    3218 out.go:177] * Restarting existing qemu2 VM for "ha-699000" ...
	I0813 17:15:23.444874    3218 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:15:23.444919    3218 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:79:d0:64:c8:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000/disk.qcow2
	I0813 17:15:23.447113    3218 main.go:141] libmachine: STDOUT: 
	I0813 17:15:23.447134    3218 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:15:23.447166    3218 fix.go:56] duration metric: took 14.73975ms for fixHost
	I0813 17:15:23.447180    3218 start.go:83] releasing machines lock for "ha-699000", held for 14.756084ms
	W0813 17:15:23.447188    3218 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:15:23.447223    3218 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:15:23.447228    3218 start.go:729] Will try again in 5 seconds ...
	I0813 17:15:28.449316    3218 start.go:360] acquireMachinesLock for ha-699000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:15:28.449737    3218 start.go:364] duration metric: took 330.541µs to acquireMachinesLock for "ha-699000"
	I0813 17:15:28.449870    3218 start.go:96] Skipping create...Using existing machine configuration
	I0813 17:15:28.449890    3218 fix.go:54] fixHost starting: 
	I0813 17:15:28.450538    3218 fix.go:112] recreateIfNeeded on ha-699000: state=Stopped err=<nil>
	W0813 17:15:28.450564    3218 fix.go:138] unexpected machine state, will restart: <nil>
	I0813 17:15:28.454944    3218 out.go:177] * Restarting existing qemu2 VM for "ha-699000" ...
	I0813 17:15:28.460874    3218 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:15:28.461119    3218 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:79:d0:64:c8:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000/disk.qcow2
	I0813 17:15:28.469775    3218 main.go:141] libmachine: STDOUT: 
	I0813 17:15:28.469858    3218 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:15:28.469943    3218 fix.go:56] duration metric: took 20.054792ms for fixHost
	I0813 17:15:28.469961    3218 start.go:83] releasing machines lock for "ha-699000", held for 20.201917ms
	W0813 17:15:28.470165    3218 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-699000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-699000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:15:28.477811    3218 out.go:177] 
	W0813 17:15:28.481969    3218 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:15:28.481994    3218 out.go:239] * 
	* 
	W0813 17:15:28.484326    3218 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:15:28.491935    3218 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-699000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-699000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-699000 -n ha-699000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-699000 -n ha-699000: exit status 7 (33.787458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-699000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (283.49s)
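Every VM restart in this run dies the same way: socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet. On a unix socket, a refused connection normally means the socket file exists but no daemon is listening behind it. Below is a small stand-alone Go probe (not part of minikube; the path comes from SocketVMnetPath in the config dump above) that separates that case from a missing socket file.

	package main

	import (
		"errors"
		"fmt"
		"io/fs"
		"net"
		"time"
	)

	func main() {
		const path = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		switch {
		case err == nil:
			fmt.Println("socket_vmnet is accepting connections")
			conn.Close()
		case errors.Is(err, fs.ErrNotExist):
			fmt.Println("socket file does not exist; socket_vmnet was never started")
		default:
			// "connection refused" here matches the driver failure in the logs:
			// the socket file exists but nothing is listening on it.
			fmt.Printf("dial failed: %v\n", err)
		}
	}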

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-699000 node delete m03 -v=7 --alsologtostderr: exit status 83 (41.628542ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-699000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-699000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 17:15:28.634646    3231 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:15:28.634873    3231 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:15:28.634876    3231 out.go:304] Setting ErrFile to fd 2...
	I0813 17:15:28.634879    3231 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:15:28.635011    3231 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:15:28.635224    3231 mustload.go:65] Loading cluster: ha-699000
	I0813 17:15:28.635450    3231 config.go:182] Loaded profile config "ha-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0813 17:15:28.635754    3231 out.go:239] ! The control-plane node ha-699000 host is not running (will try others): state=Stopped
	! The control-plane node ha-699000 host is not running (will try others): state=Stopped
	W0813 17:15:28.635864    3231 out.go:239] ! The control-plane node ha-699000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-699000-m02 host is not running (will try others): state=Stopped
	I0813 17:15:28.640712    3231 out.go:177] * The control-plane node ha-699000-m03 host is not running: state=Stopped
	I0813 17:15:28.643751    3231 out.go:177]   To start a cluster, run: "minikube start -p ha-699000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-699000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-699000 status -v=7 --alsologtostderr: exit status 7 (30.7205ms)

                                                
                                                
-- stdout --
	ha-699000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-699000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-699000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-699000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 17:15:28.675666    3233 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:15:28.676044    3233 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:15:28.676049    3233 out.go:304] Setting ErrFile to fd 2...
	I0813 17:15:28.676051    3233 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:15:28.676266    3233 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:15:28.676423    3233 out.go:298] Setting JSON to false
	I0813 17:15:28.676442    3233 mustload.go:65] Loading cluster: ha-699000
	I0813 17:15:28.676521    3233 notify.go:220] Checking for updates...
	I0813 17:15:28.676928    3233 config.go:182] Loaded profile config "ha-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:15:28.676939    3233 status.go:255] checking status of ha-699000 ...
	I0813 17:15:28.677161    3233 status.go:330] ha-699000 host status = "Stopped" (err=<nil>)
	I0813 17:15:28.677166    3233 status.go:343] host is not running, skipping remaining checks
	I0813 17:15:28.677169    3233 status.go:257] ha-699000 status: &{Name:ha-699000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0813 17:15:28.677178    3233 status.go:255] checking status of ha-699000-m02 ...
	I0813 17:15:28.677272    3233 status.go:330] ha-699000-m02 host status = "Stopped" (err=<nil>)
	I0813 17:15:28.677274    3233 status.go:343] host is not running, skipping remaining checks
	I0813 17:15:28.677276    3233 status.go:257] ha-699000-m02 status: &{Name:ha-699000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0813 17:15:28.677280    3233 status.go:255] checking status of ha-699000-m03 ...
	I0813 17:15:28.677372    3233 status.go:330] ha-699000-m03 host status = "Stopped" (err=<nil>)
	I0813 17:15:28.677374    3233 status.go:343] host is not running, skipping remaining checks
	I0813 17:15:28.677376    3233 status.go:257] ha-699000-m03 status: &{Name:ha-699000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0813 17:15:28.677379    3233 status.go:255] checking status of ha-699000-m04 ...
	I0813 17:15:28.677479    3233 status.go:330] ha-699000-m04 host status = "Stopped" (err=<nil>)
	I0813 17:15:28.677482    3233 status.go:343] host is not running, skipping remaining checks
	I0813 17:15:28.677484    3233 status.go:257] ha-699000-m04 status: &{Name:ha-699000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-699000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-699000 -n ha-699000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-699000 -n ha-699000: exit status 7 (29.67975ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-699000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
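The status walk in the stderr above (status.go:255/330/343) short-circuits per node: once the host reports "Stopped", the SSH-based kubelet and apiserver probes are skipped and every field is reported as Stopped. A condensed Go sketch of that shape, with illustrative types rather than minikube's real ones:

	package main

	import "fmt"

	type NodeStatus struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
	}

	func checkNode(name string, hostRunning bool) NodeStatus {
		st := NodeStatus{Name: name, Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
		if !hostRunning {
			// mirrors "host is not running, skipping remaining checks"
			return st
		}
		st.Host = "Running"
		// ... SSH in and probe kubelet/apiserver here ...
		return st
	}

	func main() {
		for _, n := range []string{"ha-699000", "ha-699000-m02", "ha-699000-m03", "ha-699000-m04"} {
			fmt.Printf("%+v\n", checkNode(n, false))
		}
	}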

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-699000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-699000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-699000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-699000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-699000 -n ha-699000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-699000 -n ha-699000: exit status 7 (56.996709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-699000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.04s)
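The assertion in ha_test.go:413 expects a "Degraded" profile status when some, but not all, control-plane nodes are up; this run reported "Stopped" because nothing was running at all. A Go sketch of that expectation follows; the function and the healthy-case label are illustrative assumptions, not minikube's actual profile-status code.

	package main

	import "fmt"

	// profileStatus mirrors the expectation the test asserts on.
	func profileStatus(runningControlPlanes, totalControlPlanes int) string {
		switch {
		case runningControlPlanes == 0:
			return "Stopped" // what this run actually reported
		case runningControlPlanes < totalControlPlanes:
			return "Degraded" // what the test expected after losing a node
		default:
			return "OK" // healthy-case label is an assumption
		}
	}

	func main() {
		fmt.Println(profileStatus(0, 3)) // Stopped - matches the failure above
		fmt.Println(profileStatus(2, 3)) // Degraded - what the assertion wanted
	}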

                                                
                                    
TestMultiControlPlane/serial/StopCluster (251.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 stop -v=7 --alsologtostderr
E0813 17:15:36.819579    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/functional-174000/client.crt: no such file or directory" logger="UnhandledError"
E0813 17:16:59.893910    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/functional-174000/client.crt: no such file or directory" logger="UnhandledError"
E0813 17:18:46.984724    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-699000 stop -v=7 --alsologtostderr: (4m11.063229125s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-699000 status -v=7 --alsologtostderr: exit status 7 (66.323417ms)

                                                
                                                
-- stdout --
	ha-699000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-699000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-699000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-699000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 17:19:40.856570    3300 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:19:40.856784    3300 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:19:40.856788    3300 out.go:304] Setting ErrFile to fd 2...
	I0813 17:19:40.856791    3300 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:19:40.856952    3300 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:19:40.857120    3300 out.go:298] Setting JSON to false
	I0813 17:19:40.857136    3300 mustload.go:65] Loading cluster: ha-699000
	I0813 17:19:40.857167    3300 notify.go:220] Checking for updates...
	I0813 17:19:40.857442    3300 config.go:182] Loaded profile config "ha-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:19:40.857453    3300 status.go:255] checking status of ha-699000 ...
	I0813 17:19:40.857723    3300 status.go:330] ha-699000 host status = "Stopped" (err=<nil>)
	I0813 17:19:40.857729    3300 status.go:343] host is not running, skipping remaining checks
	I0813 17:19:40.857732    3300 status.go:257] ha-699000 status: &{Name:ha-699000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0813 17:19:40.857744    3300 status.go:255] checking status of ha-699000-m02 ...
	I0813 17:19:40.857864    3300 status.go:330] ha-699000-m02 host status = "Stopped" (err=<nil>)
	I0813 17:19:40.857867    3300 status.go:343] host is not running, skipping remaining checks
	I0813 17:19:40.857870    3300 status.go:257] ha-699000-m02 status: &{Name:ha-699000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0813 17:19:40.857875    3300 status.go:255] checking status of ha-699000-m03 ...
	I0813 17:19:40.857995    3300 status.go:330] ha-699000-m03 host status = "Stopped" (err=<nil>)
	I0813 17:19:40.857998    3300 status.go:343] host is not running, skipping remaining checks
	I0813 17:19:40.858001    3300 status.go:257] ha-699000-m03 status: &{Name:ha-699000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0813 17:19:40.858006    3300 status.go:255] checking status of ha-699000-m04 ...
	I0813 17:19:40.858122    3300 status.go:330] ha-699000-m04 host status = "Stopped" (err=<nil>)
	I0813 17:19:40.858125    3300 status.go:343] host is not running, skipping remaining checks
	I0813 17:19:40.858127    3300 status.go:257] ha-699000-m04 status: &{Name:ha-699000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-699000 status -v=7 --alsologtostderr": ha-699000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-699000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-699000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-699000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-699000 status -v=7 --alsologtostderr": ha-699000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-699000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-699000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-699000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-699000 status -v=7 --alsologtostderr": ha-699000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-699000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-699000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-699000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-699000 -n ha-699000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-699000 -n ha-699000: exit status 7 (32.705875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-699000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (251.16s)
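
The three assertions above all re-parse the same human-readable status dump. The status command can also emit machine-readable output via its --output flag; a sketch of the equivalent check (the JSON field names mirror the status struct logged by status.go above, though the exact shape for a multi-node profile may vary by minikube version):

    out/minikube-darwin-arm64 -p ha-699000 status --output json
    # per node, roughly:
    # {"Name":"ha-699000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped"}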

TestMultiControlPlane/serial/RestartCluster (5.26s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-699000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-699000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.186632791s)

-- stdout --
	* [ha-699000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-699000" primary control-plane node in "ha-699000" cluster
	* Restarting existing qemu2 VM for "ha-699000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-699000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0813 17:19:40.919351    3304 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:19:40.919482    3304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:19:40.919485    3304 out.go:304] Setting ErrFile to fd 2...
	I0813 17:19:40.919489    3304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:19:40.919621    3304 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:19:40.920640    3304 out.go:298] Setting JSON to false
	I0813 17:19:40.936767    3304 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2944,"bootTime":1723591836,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:19:40.936831    3304 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:19:40.941271    3304 out.go:177] * [ha-699000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:19:40.949159    3304 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:19:40.949213    3304 notify.go:220] Checking for updates...
	I0813 17:19:40.957009    3304 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:19:40.961111    3304 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:19:40.965068    3304 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:19:40.968089    3304 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:19:40.971038    3304 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:19:40.974373    3304 config.go:182] Loaded profile config "ha-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:19:40.974633    3304 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:19:40.977994    3304 out.go:177] * Using the qemu2 driver based on existing profile
	I0813 17:19:40.985028    3304 start.go:297] selected driver: qemu2
	I0813 17:19:40.985036    3304 start.go:901] validating driver "qemu2" against &{Name:ha-699000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-699000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:19:40.985121    3304 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:19:40.987394    3304 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 17:19:40.987440    3304 cni.go:84] Creating CNI manager for ""
	I0813 17:19:40.987445    3304 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0813 17:19:40.987502    3304 start.go:340] cluster config:
	{Name:ha-699000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-699000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:19:40.991069    3304 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:19:40.998934    3304 out.go:177] * Starting "ha-699000" primary control-plane node in "ha-699000" cluster
	I0813 17:19:41.003060    3304 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:19:41.003077    3304 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 17:19:41.003088    3304 cache.go:56] Caching tarball of preloaded images
	I0813 17:19:41.003153    3304 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:19:41.003161    3304 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0813 17:19:41.003241    3304 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/ha-699000/config.json ...
	I0813 17:19:41.003656    3304 start.go:360] acquireMachinesLock for ha-699000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:19:41.003691    3304 start.go:364] duration metric: took 28.916µs to acquireMachinesLock for "ha-699000"
	I0813 17:19:41.003700    3304 start.go:96] Skipping create...Using existing machine configuration
	I0813 17:19:41.003707    3304 fix.go:54] fixHost starting: 
	I0813 17:19:41.003821    3304 fix.go:112] recreateIfNeeded on ha-699000: state=Stopped err=<nil>
	W0813 17:19:41.003829    3304 fix.go:138] unexpected machine state, will restart: <nil>
	I0813 17:19:41.008043    3304 out.go:177] * Restarting existing qemu2 VM for "ha-699000" ...
	I0813 17:19:41.016091    3304 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:19:41.016140    3304 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:79:d0:64:c8:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000/disk.qcow2
	I0813 17:19:41.018107    3304 main.go:141] libmachine: STDOUT: 
	I0813 17:19:41.018126    3304 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:19:41.018155    3304 fix.go:56] duration metric: took 14.448542ms for fixHost
	I0813 17:19:41.018160    3304 start.go:83] releasing machines lock for "ha-699000", held for 14.464833ms
	W0813 17:19:41.018164    3304 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:19:41.018208    3304 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:19:41.018213    3304 start.go:729] Will try again in 5 seconds ...
	I0813 17:19:46.020375    3304 start.go:360] acquireMachinesLock for ha-699000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:19:46.020853    3304 start.go:364] duration metric: took 376.333µs to acquireMachinesLock for "ha-699000"
	I0813 17:19:46.020995    3304 start.go:96] Skipping create...Using existing machine configuration
	I0813 17:19:46.021017    3304 fix.go:54] fixHost starting: 
	I0813 17:19:46.021864    3304 fix.go:112] recreateIfNeeded on ha-699000: state=Stopped err=<nil>
	W0813 17:19:46.021890    3304 fix.go:138] unexpected machine state, will restart: <nil>
	I0813 17:19:46.026392    3304 out.go:177] * Restarting existing qemu2 VM for "ha-699000" ...
	I0813 17:19:46.035289    3304 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:19:46.035548    3304 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:79:d0:64:c8:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/ha-699000/disk.qcow2
	I0813 17:19:46.045326    3304 main.go:141] libmachine: STDOUT: 
	I0813 17:19:46.045404    3304 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:19:46.045505    3304 fix.go:56] duration metric: took 24.492834ms for fixHost
	I0813 17:19:46.045528    3304 start.go:83] releasing machines lock for "ha-699000", held for 24.653875ms
	W0813 17:19:46.045752    3304 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-699000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-699000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:19:46.053282    3304 out.go:177] 
	W0813 17:19:46.057396    3304 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:19:46.057441    3304 out.go:239] * 
	* 
	W0813 17:19:46.059865    3304 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:19:46.071334    3304 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-699000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-699000 -n ha-699000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-699000 -n ha-699000: exit status 7 (68.880375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-699000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)
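
The libmachine: executing: lines in the stderr above show how the qemu2 driver wires up networking: qemu is not launched directly but through socket_vmnet_client, which connects to the vmnet socket and hands the connection to qemu as file descriptor 3 (-netdev socket,id=net0,fd=3). A stripped-down sketch of that same pattern, with firmware/disk options elided and all paths as they appear in the log:

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
      qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2200 -smp 2 \
        -device virtio-net-pci,netdev=net0 \
        -netdev socket,id=net0,fd=3 \
        disk.qcow2
    # "Connection refused" from the wrapper means nothing is serving /var/run/socket_vmnet,
    # so qemu never launches; minikube retries once after 5 seconds, then exits 80 (GUEST_PROVISION).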

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-699000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-699000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-699000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-699000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-699000 -n ha-699000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-699000 -n ha-699000: exit status 7 (30.498583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-699000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)
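
The assertion above is checking the Status field of this profile in the output of profile list --output json. A sketch of pulling that field out by hand (jq is an assumption here, not something the suite itself uses):

    out/minikube-darwin-arm64 profile list --output json \
      | jq -r '.valid[] | select(.Name == "ha-699000") | .Status'
    # The test expects "Degraded"; with every node down this prints "Stopped".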

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-699000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-699000 --control-plane -v=7 --alsologtostderr: exit status 83 (42.466833ms)

-- stdout --
	* The control-plane node ha-699000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-699000"

-- /stdout --
** stderr ** 
	I0813 17:19:46.256672    3326 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:19:46.256828    3326 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:19:46.256831    3326 out.go:304] Setting ErrFile to fd 2...
	I0813 17:19:46.256834    3326 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:19:46.256952    3326 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:19:46.257197    3326 mustload.go:65] Loading cluster: ha-699000
	I0813 17:19:46.257406    3326 config.go:182] Loaded profile config "ha-699000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0813 17:19:46.257732    3326 out.go:239] ! The control-plane node ha-699000 host is not running (will try others): state=Stopped
	! The control-plane node ha-699000 host is not running (will try others): state=Stopped
	W0813 17:19:46.257832    3326 out.go:239] ! The control-plane node ha-699000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-699000-m02 host is not running (will try others): state=Stopped
	I0813 17:19:46.261613    3326 out.go:177] * The control-plane node ha-699000-m03 host is not running: state=Stopped
	I0813 17:19:46.265632    3326 out.go:177]   To start a cluster, run: "minikube start -p ha-699000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-699000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-699000 -n ha-699000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-699000 -n ha-699000: exit status 7 (29.582083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-699000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestImageBuild/serial/Setup (10.06s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-997000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-997000 --driver=qemu2 : exit status 80 (9.9937325s)

-- stdout --
	* [image-997000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-997000" primary control-plane node in "image-997000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-997000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-997000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-997000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-997000 -n image-997000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-997000 -n image-997000: exit status 7 (68.416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-997000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.06s)
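
As in the other start failures in this report, the root cause in the stderr above is the qemu2 driver failing to reach the vmnet socket. A minimal triage sketch for that condition on the agent (paths taken from the logs; the launchd service label is a guess and depends on how socket_vmnet was installed):

    ls -l /var/run/socket_vmnet                 # does the socket exist at all?
    sudo launchctl list | grep -i socket_vmnet  # hypothetical label; adjust to the install
    # One-shot probe using the same client binary the driver invokes; a "Connection
    # refused" here reproduces the failure without involving minikube:
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true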

TestJSONOutput/start/Command (9.77s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-657000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-657000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.768297584s)

-- stdout --
	{"specversion":"1.0","id":"7333b52c-ad8d-4825-8b7e-da985ce1f15d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-657000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3838f409-0417-43ed-9afd-f740d11a86d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19429"}}
	{"specversion":"1.0","id":"bf70e9e4-4ee3-4dc6-a636-ea7a1d0812bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig"}}
	{"specversion":"1.0","id":"b2779c35-5a8c-480e-bc91-c23cf95a3136","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"985845e7-193e-4187-834b-926d8a84e3ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8f6c32d5-b53f-4011-8f01-8fedd0e105df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube"}}
	{"specversion":"1.0","id":"77e9500f-8a86-4350-b829-6724c1016eea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e3c9c063-d803-4485-87a2-8f566be41a28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"900d18d9-3c9b-46cb-82fc-b61e7c5b01a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"25834fa5-5885-4380-9ec1-429ddb824637","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-657000\" primary control-plane node in \"json-output-657000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"83afbae8-3b8c-4ed1-988e-dbe4cac785e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"fcb5d0c7-b95f-4f02-b4f1-f65fb061b228","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-657000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"16c8f0d8-878e-48ab-8208-d94fa1595914","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"9146f57c-93db-4c26-a24b-458eb42cc8fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"e7ad609a-1703-435d-8713-e84d8c107bcd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-657000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"d80344e4-02e4-42bf-b736-450707c4098f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"1ddcc595-3448-4993-a81e-43008c4666d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-657000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.77s)
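
Two failures stack here: the start itself exits 80, and the harness then cannot parse stdout because the driver's raw OUTPUT:/ERROR: lines are interleaved with the CloudEvents JSON, so the first non-JSON line trips "invalid character 'O' looking for beginning of value". A sketch of the same line-by-line validation the Go test performs (jq assumed):

    out/minikube-darwin-arm64 start -p json-output-657000 --output=json --user=testUser 2>/dev/null \
      | while IFS= read -r line; do
          printf '%s\n' "$line" | jq -e '.specversion' >/dev/null 2>&1 \
            || echo "not a CloudEvent: $line"
        done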

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-657000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-657000 --output=json --user=testUser: exit status 83 (78.245792ms)

-- stdout --
	{"specversion":"1.0","id":"0c390c28-af02-4574-a993-3b038e38cc88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-657000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"1d46a2ff-0f6c-456e-8893-4403abc0c571","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-657000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-657000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-657000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-657000 --output=json --user=testUser: exit status 83 (47.125875ms)

-- stdout --
	* The control-plane node json-output-657000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-657000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-657000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-657000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.15s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-201000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-201000 --driver=qemu2 : exit status 80 (9.856892375s)

-- stdout --
	* [first-201000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-201000" primary control-plane node in "first-201000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-201000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-201000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-201000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-13 17:20:18.73752 -0700 PDT m=+2058.169464917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-203000 -n second-203000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-203000 -n second-203000: exit status 85 (80.505417ms)

-- stdout --
	* Profile "second-203000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-203000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-203000" host is not running, skipping log retrieval (state="* Profile \"second-203000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-203000\"")
helpers_test.go:175: Cleaning up "second-203000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-203000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-13 17:20:18.928075 -0700 PDT m=+2058.360023292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-201000 -n first-201000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-201000 -n first-201000: exit status 7 (30.894667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-201000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-201000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-201000
--- FAIL: TestMinikubeProfile (10.15s)
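
The cleanup helpers above already delete both profiles; the same can be done by hand if a run aborts before the helpers get there (profile names from this run):

    out/minikube-darwin-arm64 delete -p first-201000
    out/minikube-darwin-arm64 delete -p second-203000
    out/minikube-darwin-arm64 profile list   # neither profile should remain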

TestMountStart/serial/StartWithMountFirst (10.25s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-042000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-042000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.1832455s)

-- stdout --
	* [mount-start-1-042000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-042000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-042000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-042000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-042000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-042000 -n mount-start-1-042000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-042000 -n mount-start-1-042000: exit status 7 (68.057625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-042000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.25s)
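
Note: every failure in this block, and in the TestMultiNode blocks that follow, has the same root cause: QEMU's networking helper cannot reach the socket_vmnet daemon, so VM creation aborts and the start command exits with status 80. A minimal Go probe (illustrative only, not part of the minikube test suite) that reproduces the connection check socket_vmnet_client performs:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Dial the same unix socket socket_vmnet_client uses; a
		// "connection refused" here matches the errors in this report.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("socket_vmnet is listening at %s\n", sock)
	}

If the probe fails, the daemon is simply down on the build host; with a Homebrew install the usual remedy is restarting the service (typically `sudo brew services restart socket_vmnet`).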

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-980000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
E0813 17:20:36.799888    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/functional-174000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-980000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.911123041s)

                                                
                                                
-- stdout --
	* [multinode-980000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-980000" primary control-plane node in "multinode-980000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-980000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 17:20:29.497547    3471 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:20:29.497673    3471 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:20:29.497677    3471 out.go:304] Setting ErrFile to fd 2...
	I0813 17:20:29.497679    3471 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:20:29.497805    3471 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:20:29.498830    3471 out.go:298] Setting JSON to false
	I0813 17:20:29.514833    3471 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2993,"bootTime":1723591836,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:20:29.514910    3471 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:20:29.522123    3471 out.go:177] * [multinode-980000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:20:29.529044    3471 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:20:29.529112    3471 notify.go:220] Checking for updates...
	I0813 17:20:29.537088    3471 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:20:29.540068    3471 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:20:29.543100    3471 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:20:29.546081    3471 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:20:29.547404    3471 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:20:29.550290    3471 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:20:29.554151    3471 out.go:177] * Using the qemu2 driver based on user configuration
	I0813 17:20:29.559001    3471 start.go:297] selected driver: qemu2
	I0813 17:20:29.559006    3471 start.go:901] validating driver "qemu2" against <nil>
	I0813 17:20:29.559012    3471 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:20:29.561300    3471 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 17:20:29.565085    3471 out.go:177] * Automatically selected the socket_vmnet network
	I0813 17:20:29.566479    3471 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 17:20:29.566510    3471 cni.go:84] Creating CNI manager for ""
	I0813 17:20:29.566515    3471 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0813 17:20:29.566519    3471 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 17:20:29.566541    3471 start.go:340] cluster config:
	{Name:multinode-980000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-980000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:20:29.570191    3471 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:20:29.577075    3471 out.go:177] * Starting "multinode-980000" primary control-plane node in "multinode-980000" cluster
	I0813 17:20:29.581078    3471 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:20:29.581098    3471 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 17:20:29.581107    3471 cache.go:56] Caching tarball of preloaded images
	I0813 17:20:29.581171    3471 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:20:29.581183    3471 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0813 17:20:29.581397    3471 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/multinode-980000/config.json ...
	I0813 17:20:29.581408    3471 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/multinode-980000/config.json: {Name:mkf030d5401f5e9b620f479b9602b564b9bc4a90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:20:29.581602    3471 start.go:360] acquireMachinesLock for multinode-980000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:20:29.581633    3471 start.go:364] duration metric: took 26.083µs to acquireMachinesLock for "multinode-980000"
	I0813 17:20:29.581644    3471 start.go:93] Provisioning new machine with config: &{Name:multinode-980000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-980000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:20:29.581673    3471 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:20:29.589983    3471 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 17:20:29.606529    3471 start.go:159] libmachine.API.Create for "multinode-980000" (driver="qemu2")
	I0813 17:20:29.606551    3471 client.go:168] LocalClient.Create starting
	I0813 17:20:29.606618    3471 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:20:29.606650    3471 main.go:141] libmachine: Decoding PEM data...
	I0813 17:20:29.606659    3471 main.go:141] libmachine: Parsing certificate...
	I0813 17:20:29.606691    3471 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:20:29.606713    3471 main.go:141] libmachine: Decoding PEM data...
	I0813 17:20:29.606722    3471 main.go:141] libmachine: Parsing certificate...
	I0813 17:20:29.607204    3471 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:20:29.753320    3471 main.go:141] libmachine: Creating SSH key...
	I0813 17:20:29.878547    3471 main.go:141] libmachine: Creating Disk image...
	I0813 17:20:29.878556    3471 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:20:29.878747    3471 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/disk.qcow2
	I0813 17:20:29.888160    3471 main.go:141] libmachine: STDOUT: 
	I0813 17:20:29.888178    3471 main.go:141] libmachine: STDERR: 
	I0813 17:20:29.888214    3471 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/disk.qcow2 +20000M
	I0813 17:20:29.896092    3471 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:20:29.896109    3471 main.go:141] libmachine: STDERR: 
	I0813 17:20:29.896120    3471 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/disk.qcow2
	I0813 17:20:29.896132    3471 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:20:29.896140    3471 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:20:29.896169    3471 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:fd:f9:3c:dd:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/disk.qcow2
	I0813 17:20:29.897809    3471 main.go:141] libmachine: STDOUT: 
	I0813 17:20:29.897823    3471 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:20:29.897841    3471 client.go:171] duration metric: took 291.289458ms to LocalClient.Create
	I0813 17:20:31.900065    3471 start.go:128] duration metric: took 2.318340334s to createHost
	I0813 17:20:31.900125    3471 start.go:83] releasing machines lock for "multinode-980000", held for 2.318515666s
	W0813 17:20:31.900175    3471 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:20:31.914344    3471 out.go:177] * Deleting "multinode-980000" in qemu2 ...
	W0813 17:20:31.948915    3471 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:20:31.948945    3471 start.go:729] Will try again in 5 seconds ...
	I0813 17:20:36.951095    3471 start.go:360] acquireMachinesLock for multinode-980000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:20:36.951546    3471 start.go:364] duration metric: took 330.792µs to acquireMachinesLock for "multinode-980000"
	I0813 17:20:36.951653    3471 start.go:93] Provisioning new machine with config: &{Name:multinode-980000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-980000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:20:36.951979    3471 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:20:36.961537    3471 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 17:20:37.006435    3471 start.go:159] libmachine.API.Create for "multinode-980000" (driver="qemu2")
	I0813 17:20:37.006491    3471 client.go:168] LocalClient.Create starting
	I0813 17:20:37.006607    3471 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:20:37.006664    3471 main.go:141] libmachine: Decoding PEM data...
	I0813 17:20:37.006677    3471 main.go:141] libmachine: Parsing certificate...
	I0813 17:20:37.006733    3471 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:20:37.006773    3471 main.go:141] libmachine: Decoding PEM data...
	I0813 17:20:37.006786    3471 main.go:141] libmachine: Parsing certificate...
	I0813 17:20:37.007344    3471 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:20:37.160725    3471 main.go:141] libmachine: Creating SSH key...
	I0813 17:20:37.320182    3471 main.go:141] libmachine: Creating Disk image...
	I0813 17:20:37.320188    3471 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:20:37.320404    3471 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/disk.qcow2
	I0813 17:20:37.329928    3471 main.go:141] libmachine: STDOUT: 
	I0813 17:20:37.329947    3471 main.go:141] libmachine: STDERR: 
	I0813 17:20:37.329989    3471 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/disk.qcow2 +20000M
	I0813 17:20:37.337935    3471 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:20:37.337951    3471 main.go:141] libmachine: STDERR: 
	I0813 17:20:37.337962    3471 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/disk.qcow2
	I0813 17:20:37.337965    3471 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:20:37.337978    3471 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:20:37.338003    3471 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:31:ce:fc:f4:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/disk.qcow2
	I0813 17:20:37.339634    3471 main.go:141] libmachine: STDOUT: 
	I0813 17:20:37.339653    3471 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:20:37.339667    3471 client.go:171] duration metric: took 333.174667ms to LocalClient.Create
	I0813 17:20:39.341825    3471 start.go:128] duration metric: took 2.389855s to createHost
	I0813 17:20:39.341927    3471 start.go:83] releasing machines lock for "multinode-980000", held for 2.390359375s
	W0813 17:20:39.342238    3471 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-980000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-980000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:20:39.350835    3471 out.go:177] 
	W0813 17:20:39.353811    3471 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:20:39.353836    3471 out.go:239] * 
	* 
	W0813 17:20:39.356383    3471 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:20:39.365818    3471 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-980000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000: exit status 7 (66.521208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.98s)
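
Note: the qemu invocation logged above (main.go:141) shows how the guest network is wired: socket_vmnet_client is supposed to connect to /var/run/socket_vmnet and exec qemu with that connection inherited as file descriptor 3, which is what `-netdev socket,id=net0,fd=3` refers to. A sketch of that hand-off in Go (illustrative; the real socket_vmnet_client is a C program):

	package main

	import (
		"net"
		"os"
		"os/exec"
	)

	func main() {
		// This dial is the step that fails with "Connection refused"
		// throughout this report.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			panic(err)
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			panic(err)
		}
		// ExtraFiles[0] becomes fd 3 in the child process, matching the
		// "-netdev socket,id=net0,fd=3" argument in the log above.
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f}
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}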

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (116.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-980000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-980000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (132.628625ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-980000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-980000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-980000 -- rollout status deployment/busybox: exit status 1 (57.840375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-980000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-980000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-980000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.404917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-980000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-980000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-980000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.781791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-980000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-980000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-980000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.218959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-980000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-980000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-980000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.13525ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-980000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-980000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-980000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.946958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-980000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-980000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-980000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.736375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-980000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-980000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-980000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.709084ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-980000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-980000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-980000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.325083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-980000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-980000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-980000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.346166ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-980000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-980000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-980000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.66575ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-980000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-980000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-980000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.827916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-980000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-980000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-980000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.346625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-980000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-980000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-980000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.821458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-980000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-980000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-980000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.003ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-980000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-980000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-980000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.610209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-980000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000: exit status 7 (29.885625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (116.85s)
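
Note: the 116.85s runtime is almost entirely wait time. The cluster never came up, so each kubectl call fails immediately, and the test keeps re-polling for Pod IPs (multinode_test.go:505/508 above) until it gives up at multinode_test.go:524. The shape of that poll-until-deadline loop, as a generic sketch (the timeout and interval below are assumptions; the report does not show the real values):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// pollUntil retries fn at a fixed interval until it succeeds or the
	// deadline passes.
	func pollUntil(timeout, interval time.Duration, fn func() error) error {
		deadline := time.Now().Add(timeout)
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting: %w", err)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		err := pollUntil(3*time.Second, time.Second, func() error {
			return errors.New(`no server found for cluster "multinode-980000"`)
		})
		fmt.Println(err)
	}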

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-980000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-980000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.613583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-980000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000: exit status 7 (30.327958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-980000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-980000 -v 3 --alsologtostderr: exit status 83 (43.582625ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-980000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-980000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 17:22:36.414019    3566 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:22:36.414167    3566 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:22:36.414171    3566 out.go:304] Setting ErrFile to fd 2...
	I0813 17:22:36.414173    3566 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:22:36.414297    3566 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:22:36.414527    3566 mustload.go:65] Loading cluster: multinode-980000
	I0813 17:22:36.415106    3566 config.go:182] Loaded profile config "multinode-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:22:36.420375    3566 out.go:177] * The control-plane node multinode-980000 host is not running: state=Stopped
	I0813 17:22:36.424300    3566 out.go:177]   To start a cluster, run: "minikube start -p multinode-980000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-980000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000: exit status 7 (29.541875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-980000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-980000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (28.344ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-980000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-980000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-980000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000: exit status 7 (30.039791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
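
Note: the secondary error here, "unexpected end of JSON input", follows mechanically from the first: kubectl exited non-zero without writing to stdout, and decoding an empty byte slice with encoding/json produces exactly that message. A minimal demonstration:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var labels map[string]any
		// kubectl's stdout was empty; decoding it reproduces the error.
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // unexpected end of JSON input
	}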

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-980000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-980000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-980000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"multinode-980000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000: exit status 7 (29.93925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-980000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-980000 status --output json --alsologtostderr: exit status 7 (29.763333ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-980000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 17:22:36.624562    3578 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:22:36.624694    3578 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:22:36.624697    3578 out.go:304] Setting ErrFile to fd 2...
	I0813 17:22:36.624699    3578 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:22:36.624842    3578 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:22:36.624949    3578 out.go:298] Setting JSON to true
	I0813 17:22:36.624966    3578 mustload.go:65] Loading cluster: multinode-980000
	I0813 17:22:36.625018    3578 notify.go:220] Checking for updates...
	I0813 17:22:36.625191    3578 config.go:182] Loaded profile config "multinode-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:22:36.625196    3578 status.go:255] checking status of multinode-980000 ...
	I0813 17:22:36.625401    3578 status.go:330] multinode-980000 host status = "Stopped" (err=<nil>)
	I0813 17:22:36.625406    3578 status.go:343] host is not running, skipping remaining checks
	I0813 17:22:36.625408    3578 status.go:257] multinode-980000 status: &{Name:multinode-980000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-980000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000: exit status 7 (29.895333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
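
Note: besides the host being down, this failure shows a decoding mismatch: for a single-node profile, `status --output json` prints one JSON object (see the stdout above), while the test unmarshals into a []cmd.Status slice. A decoder tolerant of both shapes, as a sketch (the Status type here is a stand-in with only the fields needed for the example, not minikube's actual cmd.Status):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status stands in for minikube's cmd.Status.
	type Status struct {
		Name string
		Host string
	}

	// decodeStatuses accepts either a bare object or an array of objects.
	func decodeStatuses(data []byte) ([]Status, error) {
		var many []Status
		if err := json.Unmarshal(data, &many); err == nil {
			return many, nil
		}
		var one Status
		if err := json.Unmarshal(data, &one); err != nil {
			return nil, err
		}
		return []Status{one}, nil
	}

	func main() {
		out := []byte(`{"Name":"multinode-980000","Host":"Stopped"}`)
		statuses, err := decodeStatuses(out)
		fmt.Println(statuses, err) // [{multinode-980000 Stopped}] <nil>
	}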

                                                
                                    
TestMultiNode/serial/StopNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-980000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-980000 node stop m03: exit status 85 (46.313458ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-980000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-980000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-980000 status: exit status 7 (29.840625ms)

-- stdout --
	multinode-980000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-980000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-980000 status --alsologtostderr: exit status 7 (29.7795ms)

-- stdout --
	multinode-980000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0813 17:22:36.761210    3586 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:22:36.761360    3586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:22:36.761363    3586 out.go:304] Setting ErrFile to fd 2...
	I0813 17:22:36.761365    3586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:22:36.761490    3586 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:22:36.761603    3586 out.go:298] Setting JSON to false
	I0813 17:22:36.761618    3586 mustload.go:65] Loading cluster: multinode-980000
	I0813 17:22:36.761660    3586 notify.go:220] Checking for updates...
	I0813 17:22:36.761810    3586 config.go:182] Loaded profile config "multinode-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:22:36.761815    3586 status.go:255] checking status of multinode-980000 ...
	I0813 17:22:36.762022    3586 status.go:330] multinode-980000 host status = "Stopped" (err=<nil>)
	I0813 17:22:36.762028    3586 status.go:343] host is not running, skipping remaining checks
	I0813 17:22:36.762030    3586 status.go:257] multinode-980000 status: &{Name:multinode-980000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-980000 status --alsologtostderr": multinode-980000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000: exit status 7 (30.170958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
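
Analysis note: exit status 85 (GUEST_NODE_RETRIEVE) means the profile has no record of an m03 node; it was never created because the multi-node start failed earlier in this run. A sketch of a pre-flight existence check, assuming `node list` prints one node per line with the node name in the first column (binary path and profile name are taken from the log above):
-- sketch --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube which nodes this profile actually has.
	out, err := exec.Command("out/minikube-darwin-arm64",
		"node", "list", "-p", "multinode-980000").Output()
	if err != nil {
		fmt.Println("node list failed:", err)
		return
	}
	// Assumed format: one node per line, name in the first column.
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		fields := strings.Fields(line)
		if len(fields) > 0 && strings.HasSuffix(fields[0], "-m03") {
			fmt.Println("m03 exists:", line)
			return
		}
	}
	fmt.Println("m03 not in profile; `node stop m03` will exit 85 (GUEST_NODE_RETRIEVE)")
}
-- /sketch --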

TestMultiNode/serial/StartAfterStop (45.28s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-980000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-980000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.754959ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0813 17:22:36.822253    3590 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:22:36.822510    3590 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:22:36.822513    3590 out.go:304] Setting ErrFile to fd 2...
	I0813 17:22:36.822516    3590 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:22:36.822644    3590 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:22:36.822871    3590 mustload.go:65] Loading cluster: multinode-980000
	I0813 17:22:36.823063    3590 config.go:182] Loaded profile config "multinode-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:22:36.827348    3590 out.go:177] 
	W0813 17:22:36.830291    3590 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0813 17:22:36.830295    3590 out.go:239] * 
	* 
	W0813 17:22:36.831963    3590 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:22:36.835239    3590 out.go:177] 

** /stderr **
multinode_test.go:284: I0813 17:22:36.822253    3590 out.go:291] Setting OutFile to fd 1 ...
I0813 17:22:36.822510    3590 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0813 17:22:36.822513    3590 out.go:304] Setting ErrFile to fd 2...
I0813 17:22:36.822516    3590 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0813 17:22:36.822644    3590 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
I0813 17:22:36.822871    3590 mustload.go:65] Loading cluster: multinode-980000
I0813 17:22:36.823063    3590 config.go:182] Loaded profile config "multinode-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0813 17:22:36.827348    3590 out.go:177] 
W0813 17:22:36.830291    3590 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0813 17:22:36.830295    3590 out.go:239] * 
* 
W0813 17:22:36.831963    3590 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0813 17:22:36.835239    3590 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-980000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-980000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-980000 status -v=7 --alsologtostderr: exit status 7 (29.297ms)

-- stdout --
	multinode-980000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0813 17:22:36.867917    3592 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:22:36.868044    3592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:22:36.868047    3592 out.go:304] Setting ErrFile to fd 2...
	I0813 17:22:36.868050    3592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:22:36.868168    3592 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:22:36.868284    3592 out.go:298] Setting JSON to false
	I0813 17:22:36.868296    3592 mustload.go:65] Loading cluster: multinode-980000
	I0813 17:22:36.868353    3592 notify.go:220] Checking for updates...
	I0813 17:22:36.868499    3592 config.go:182] Loaded profile config "multinode-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:22:36.868510    3592 status.go:255] checking status of multinode-980000 ...
	I0813 17:22:36.868732    3592 status.go:330] multinode-980000 host status = "Stopped" (err=<nil>)
	I0813 17:22:36.868737    3592 status.go:343] host is not running, skipping remaining checks
	I0813 17:22:36.868739    3592 status.go:257] multinode-980000 status: &{Name:multinode-980000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-980000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-980000 status -v=7 --alsologtostderr: exit status 7 (73.046875ms)

-- stdout --
	multinode-980000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0813 17:22:38.030208    3594 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:22:38.030438    3594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:22:38.030443    3594 out.go:304] Setting ErrFile to fd 2...
	I0813 17:22:38.030447    3594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:22:38.030683    3594 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:22:38.030850    3594 out.go:298] Setting JSON to false
	I0813 17:22:38.030868    3594 mustload.go:65] Loading cluster: multinode-980000
	I0813 17:22:38.030901    3594 notify.go:220] Checking for updates...
	I0813 17:22:38.031156    3594 config.go:182] Loaded profile config "multinode-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:22:38.031164    3594 status.go:255] checking status of multinode-980000 ...
	I0813 17:22:38.031452    3594 status.go:330] multinode-980000 host status = "Stopped" (err=<nil>)
	I0813 17:22:38.031458    3594 status.go:343] host is not running, skipping remaining checks
	I0813 17:22:38.031461    3594 status.go:257] multinode-980000 status: &{Name:multinode-980000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-980000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-980000 status -v=7 --alsologtostderr: exit status 7 (75.075583ms)

-- stdout --
	multinode-980000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0813 17:22:39.516680    3596 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:22:39.516903    3596 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:22:39.516908    3596 out.go:304] Setting ErrFile to fd 2...
	I0813 17:22:39.516912    3596 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:22:39.517126    3596 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:22:39.517308    3596 out.go:298] Setting JSON to false
	I0813 17:22:39.517325    3596 mustload.go:65] Loading cluster: multinode-980000
	I0813 17:22:39.517370    3596 notify.go:220] Checking for updates...
	I0813 17:22:39.517647    3596 config.go:182] Loaded profile config "multinode-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:22:39.517654    3596 status.go:255] checking status of multinode-980000 ...
	I0813 17:22:39.517958    3596 status.go:330] multinode-980000 host status = "Stopped" (err=<nil>)
	I0813 17:22:39.517965    3596 status.go:343] host is not running, skipping remaining checks
	I0813 17:22:39.517968    3596 status.go:257] multinode-980000 status: &{Name:multinode-980000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-980000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-980000 status -v=7 --alsologtostderr: exit status 7 (73.660208ms)

-- stdout --
	multinode-980000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0813 17:22:42.914243    3598 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:22:42.914481    3598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:22:42.914486    3598 out.go:304] Setting ErrFile to fd 2...
	I0813 17:22:42.914490    3598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:22:42.914682    3598 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:22:42.914853    3598 out.go:298] Setting JSON to false
	I0813 17:22:42.914870    3598 mustload.go:65] Loading cluster: multinode-980000
	I0813 17:22:42.914901    3598 notify.go:220] Checking for updates...
	I0813 17:22:42.915173    3598 config.go:182] Loaded profile config "multinode-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:22:42.915180    3598 status.go:255] checking status of multinode-980000 ...
	I0813 17:22:42.915496    3598 status.go:330] multinode-980000 host status = "Stopped" (err=<nil>)
	I0813 17:22:42.915502    3598 status.go:343] host is not running, skipping remaining checks
	I0813 17:22:42.915505    3598 status.go:257] multinode-980000 status: &{Name:multinode-980000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-980000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-980000 status -v=7 --alsologtostderr: exit status 7 (74.028459ms)

-- stdout --
	multinode-980000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0813 17:22:46.690964    3603 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:22:46.691180    3603 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:22:46.691185    3603 out.go:304] Setting ErrFile to fd 2...
	I0813 17:22:46.691188    3603 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:22:46.691411    3603 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:22:46.691596    3603 out.go:298] Setting JSON to false
	I0813 17:22:46.691613    3603 mustload.go:65] Loading cluster: multinode-980000
	I0813 17:22:46.691643    3603 notify.go:220] Checking for updates...
	I0813 17:22:46.691893    3603 config.go:182] Loaded profile config "multinode-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:22:46.691901    3603 status.go:255] checking status of multinode-980000 ...
	I0813 17:22:46.692212    3603 status.go:330] multinode-980000 host status = "Stopped" (err=<nil>)
	I0813 17:22:46.692219    3603 status.go:343] host is not running, skipping remaining checks
	I0813 17:22:46.692223    3603 status.go:257] multinode-980000 status: &{Name:multinode-980000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-980000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-980000 status -v=7 --alsologtostderr: exit status 7 (71.652708ms)

-- stdout --
	multinode-980000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0813 17:22:49.787872    3607 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:22:49.788081    3607 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:22:49.788086    3607 out.go:304] Setting ErrFile to fd 2...
	I0813 17:22:49.788089    3607 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:22:49.788273    3607 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:22:49.788458    3607 out.go:298] Setting JSON to false
	I0813 17:22:49.788475    3607 mustload.go:65] Loading cluster: multinode-980000
	I0813 17:22:49.788521    3607 notify.go:220] Checking for updates...
	I0813 17:22:49.788760    3607 config.go:182] Loaded profile config "multinode-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:22:49.788767    3607 status.go:255] checking status of multinode-980000 ...
	I0813 17:22:49.789048    3607 status.go:330] multinode-980000 host status = "Stopped" (err=<nil>)
	I0813 17:22:49.789054    3607 status.go:343] host is not running, skipping remaining checks
	I0813 17:22:49.789057    3607 status.go:257] multinode-980000 status: &{Name:multinode-980000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-980000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-980000 status -v=7 --alsologtostderr: exit status 7 (74.077334ms)

-- stdout --
	multinode-980000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0813 17:23:01.135724    3611 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:23:01.135916    3611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:23:01.135921    3611 out.go:304] Setting ErrFile to fd 2...
	I0813 17:23:01.135925    3611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:23:01.136098    3611 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:23:01.136266    3611 out.go:298] Setting JSON to false
	I0813 17:23:01.136281    3611 mustload.go:65] Loading cluster: multinode-980000
	I0813 17:23:01.136322    3611 notify.go:220] Checking for updates...
	I0813 17:23:01.136545    3611 config.go:182] Loaded profile config "multinode-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:23:01.136552    3611 status.go:255] checking status of multinode-980000 ...
	I0813 17:23:01.136846    3611 status.go:330] multinode-980000 host status = "Stopped" (err=<nil>)
	I0813 17:23:01.136852    3611 status.go:343] host is not running, skipping remaining checks
	I0813 17:23:01.136855    3611 status.go:257] multinode-980000 status: &{Name:multinode-980000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-980000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-980000 status -v=7 --alsologtostderr: exit status 7 (73.144791ms)

-- stdout --
	multinode-980000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0813 17:23:09.707892    3613 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:23:09.708076    3613 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:23:09.708082    3613 out.go:304] Setting ErrFile to fd 2...
	I0813 17:23:09.708085    3613 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:23:09.708275    3613 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:23:09.708418    3613 out.go:298] Setting JSON to false
	I0813 17:23:09.708437    3613 mustload.go:65] Loading cluster: multinode-980000
	I0813 17:23:09.708469    3613 notify.go:220] Checking for updates...
	I0813 17:23:09.708700    3613 config.go:182] Loaded profile config "multinode-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:23:09.708707    3613 status.go:255] checking status of multinode-980000 ...
	I0813 17:23:09.708984    3613 status.go:330] multinode-980000 host status = "Stopped" (err=<nil>)
	I0813 17:23:09.708991    3613 status.go:343] host is not running, skipping remaining checks
	I0813 17:23:09.708994    3613 status.go:257] multinode-980000 status: &{Name:multinode-980000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-980000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-980000 status -v=7 --alsologtostderr: exit status 7 (71.741791ms)

-- stdout --
	multinode-980000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0813 17:23:22.039701    3615 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:23:22.039927    3615 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:23:22.039933    3615 out.go:304] Setting ErrFile to fd 2...
	I0813 17:23:22.039937    3615 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:23:22.040119    3615 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:23:22.040284    3615 out.go:298] Setting JSON to false
	I0813 17:23:22.040302    3615 mustload.go:65] Loading cluster: multinode-980000
	I0813 17:23:22.040344    3615 notify.go:220] Checking for updates...
	I0813 17:23:22.040595    3615 config.go:182] Loaded profile config "multinode-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:23:22.040606    3615 status.go:255] checking status of multinode-980000 ...
	I0813 17:23:22.040902    3615 status.go:330] multinode-980000 host status = "Stopped" (err=<nil>)
	I0813 17:23:22.040909    3615 status.go:343] host is not running, skipping remaining checks
	I0813 17:23:22.040912    3615 status.go:257] multinode-980000 status: &{Name:multinode-980000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-980000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000: exit status 7 (34.434209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (45.28s)
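
Analysis note: nearly all of this subtest's 45s is the retry loop at multinode_test.go:290, which re-runs `status` with growing pauses while the host stays Stopped (visible in the timestamps running from 17:22:36 to 17:23:22). A sketch of that polling pattern, assuming a 1.5x backoff; the harness's exact intervals and budget may differ:
-- sketch --
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// pollStatus re-runs `minikube status` until it exits 0 or the time
// budget is spent, sleeping longer after each failure.
func pollStatus(binary, profile string, budget time.Duration) error {
	deadline := time.Now().Add(budget)
	delay := time.Second // assumed starting interval
	for {
		err := exec.Command(binary, "-p", profile, "status").Run()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("status never became healthy: %w", err)
		}
		time.Sleep(delay)
		delay = delay * 3 / 2 // assumed 1.5x backoff
	}
}

func main() {
	fmt.Println(pollStatus("out/minikube-darwin-arm64", "multinode-980000", 45*time.Second))
}
-- /sketch --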

TestMultiNode/serial/RestartKeepsNodes (8.8s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-980000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-980000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-980000: (3.4593625s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-980000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-980000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.20860025s)

-- stdout --
	* [multinode-980000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-980000" primary control-plane node in "multinode-980000" cluster
	* Restarting existing qemu2 VM for "multinode-980000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-980000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0813 17:23:25.623481    3640 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:23:25.624029    3640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:23:25.624084    3640 out.go:304] Setting ErrFile to fd 2...
	I0813 17:23:25.624096    3640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:23:25.624647    3640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:23:25.626215    3640 out.go:298] Setting JSON to false
	I0813 17:23:25.646413    3640 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3169,"bootTime":1723591836,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:23:25.646495    3640 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:23:25.651215    3640 out.go:177] * [multinode-980000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:23:25.654239    3640 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:23:25.654310    3640 notify.go:220] Checking for updates...
	I0813 17:23:25.662166    3640 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:23:25.665199    3640 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:23:25.668212    3640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:23:25.671203    3640 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:23:25.674181    3640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:23:25.677564    3640 config.go:182] Loaded profile config "multinode-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:23:25.677628    3640 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:23:25.682079    3640 out.go:177] * Using the qemu2 driver based on existing profile
	I0813 17:23:25.689108    3640 start.go:297] selected driver: qemu2
	I0813 17:23:25.689115    3640 start.go:901] validating driver "qemu2" against &{Name:multinode-980000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-980000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:23:25.689180    3640 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:23:25.691689    3640 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 17:23:25.691746    3640 cni.go:84] Creating CNI manager for ""
	I0813 17:23:25.691752    3640 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0813 17:23:25.691792    3640 start.go:340] cluster config:
	{Name:multinode-980000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-980000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:23:25.695550    3640 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:23:25.703119    3640 out.go:177] * Starting "multinode-980000" primary control-plane node in "multinode-980000" cluster
	I0813 17:23:25.707017    3640 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:23:25.707042    3640 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 17:23:25.707050    3640 cache.go:56] Caching tarball of preloaded images
	I0813 17:23:25.707127    3640 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:23:25.707135    3640 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0813 17:23:25.707218    3640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/multinode-980000/config.json ...
	I0813 17:23:25.707646    3640 start.go:360] acquireMachinesLock for multinode-980000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:23:25.707686    3640 start.go:364] duration metric: took 33.375µs to acquireMachinesLock for "multinode-980000"
	I0813 17:23:25.707697    3640 start.go:96] Skipping create...Using existing machine configuration
	I0813 17:23:25.707704    3640 fix.go:54] fixHost starting: 
	I0813 17:23:25.707837    3640 fix.go:112] recreateIfNeeded on multinode-980000: state=Stopped err=<nil>
	W0813 17:23:25.707850    3640 fix.go:138] unexpected machine state, will restart: <nil>
	I0813 17:23:25.712198    3640 out.go:177] * Restarting existing qemu2 VM for "multinode-980000" ...
	I0813 17:23:25.720121    3640 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:23:25.720167    3640 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:31:ce:fc:f4:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/disk.qcow2
	I0813 17:23:25.722326    3640 main.go:141] libmachine: STDOUT: 
	I0813 17:23:25.722347    3640 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:23:25.722377    3640 fix.go:56] duration metric: took 14.67225ms for fixHost
	I0813 17:23:25.722382    3640 start.go:83] releasing machines lock for "multinode-980000", held for 14.69175ms
	W0813 17:23:25.722388    3640 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:23:25.722419    3640 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:23:25.722424    3640 start.go:729] Will try again in 5 seconds ...
	I0813 17:23:30.722921    3640 start.go:360] acquireMachinesLock for multinode-980000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:23:30.723350    3640 start.go:364] duration metric: took 329.5µs to acquireMachinesLock for "multinode-980000"
	I0813 17:23:30.723517    3640 start.go:96] Skipping create...Using existing machine configuration
	I0813 17:23:30.723537    3640 fix.go:54] fixHost starting: 
	I0813 17:23:30.724274    3640 fix.go:112] recreateIfNeeded on multinode-980000: state=Stopped err=<nil>
	W0813 17:23:30.724301    3640 fix.go:138] unexpected machine state, will restart: <nil>
	I0813 17:23:30.728692    3640 out.go:177] * Restarting existing qemu2 VM for "multinode-980000" ...
	I0813 17:23:30.732627    3640 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:23:30.732822    3640 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:31:ce:fc:f4:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/disk.qcow2
	I0813 17:23:30.738444    3640 main.go:141] libmachine: STDOUT: 
	I0813 17:23:30.738490    3640 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:23:30.738546    3640 fix.go:56] duration metric: took 15.013667ms for fixHost
	I0813 17:23:30.738560    3640 start.go:83] releasing machines lock for "multinode-980000", held for 15.188083ms
	W0813 17:23:30.738698    3640 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-980000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-980000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:23:30.746567    3640 out.go:177] 
	W0813 17:23:30.750668    3640 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:23:30.750689    3640 out.go:239] * 
	* 
	W0813 17:23:30.753038    3640 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:23:30.762709    3640 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-980000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-980000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000: exit status 7 (33.606917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.80s)
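
Analysis note: both restart attempts die on `Failed to connect to "/var/run/socket_vmnet": Connection refused`. The libmachine command line above shows why: the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, so no VM can come up while the socket_vmnet daemon is not listening on its unix socket. A sketch that probes the socket the same way; how the daemon gets (re)started is environment-specific and not shown here:
-- sketch --
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the unix socket that socket_vmnet_client connects to.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err) // matches the "Connection refused" above
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
-- /sketch --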

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-980000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-980000 node delete m03: exit status 83 (40.638209ms)

-- stdout --
	* The control-plane node multinode-980000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-980000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-980000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-980000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-980000 status --alsologtostderr: exit status 7 (29.696ms)

-- stdout --
	multinode-980000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0813 17:23:30.946793    3655 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:23:30.946933    3655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:23:30.946936    3655 out.go:304] Setting ErrFile to fd 2...
	I0813 17:23:30.946938    3655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:23:30.947058    3655 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:23:30.947176    3655 out.go:298] Setting JSON to false
	I0813 17:23:30.947188    3655 mustload.go:65] Loading cluster: multinode-980000
	I0813 17:23:30.947249    3655 notify.go:220] Checking for updates...
	I0813 17:23:30.947383    3655 config.go:182] Loaded profile config "multinode-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:23:30.947388    3655 status.go:255] checking status of multinode-980000 ...
	I0813 17:23:30.947579    3655 status.go:330] multinode-980000 host status = "Stopped" (err=<nil>)
	I0813 17:23:30.947584    3655 status.go:343] host is not running, skipping remaining checks
	I0813 17:23:30.947586    3655 status.go:257] multinode-980000 status: &{Name:multinode-980000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-980000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000: exit status 7 (29.631375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
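
Analysis note: exit status 83 is minikube declining node surgery while the control-plane host is Stopped, as the stdout advice says. The same `--format={{.Host}}` template the post-mortem uses works as a cheap pre-flight guard; a sketch, with binary path and profile taken from the log above:
-- sketch --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Render only the Host field of the status, e.g. "Running" or "Stopped".
	out, _ := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "multinode-980000").Output()
	if state := strings.TrimSpace(string(out)); state != "Running" {
		fmt.Printf("host is %q; start the cluster before node delete\n", state)
		return
	}
	fmt.Println("host is running; node operations can proceed")
}
-- /sketch --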

TestMultiNode/serial/StopMultiNode (3.34s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-980000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-980000 stop: (3.216179416s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-980000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-980000 status: exit status 7 (63.545333ms)

-- stdout --
	multinode-980000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-980000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-980000 status --alsologtostderr: exit status 7 (32.67375ms)

-- stdout --
	multinode-980000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0813 17:23:34.289382    3679 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:23:34.289507    3679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:23:34.289510    3679 out.go:304] Setting ErrFile to fd 2...
	I0813 17:23:34.289513    3679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:23:34.289652    3679 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:23:34.289765    3679 out.go:298] Setting JSON to false
	I0813 17:23:34.289776    3679 mustload.go:65] Loading cluster: multinode-980000
	I0813 17:23:34.289837    3679 notify.go:220] Checking for updates...
	I0813 17:23:34.289984    3679 config.go:182] Loaded profile config "multinode-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:23:34.289989    3679 status.go:255] checking status of multinode-980000 ...
	I0813 17:23:34.290177    3679 status.go:330] multinode-980000 host status = "Stopped" (err=<nil>)
	I0813 17:23:34.290183    3679 status.go:343] host is not running, skipping remaining checks
	I0813 17:23:34.290185    3679 status.go:257] multinode-980000 status: &{Name:multinode-980000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
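
Editor's note: the assertions below grep the human-readable status text. Current minikube releases also document a machine-readable output selector; the flag was not exercised in this run, so treat it as an assumption:

	out/minikube-darwin-arm64 status -p multinode-980000 --output=json
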
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-980000 status --alsologtostderr": multinode-980000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-980000 status --alsologtostderr": multinode-980000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

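Editor's note: both assertions presumably count status lines, one "host: Stopped" / "kubelet: Stopped" pair per node, so this two-node cluster should yield two matches; with the second node already lost in the earlier failures, only one block is printed. A rough shell equivalent of the check, under that assumption:

	out/minikube-darwin-arm64 -p multinode-980000 status --alsologtostderr | grep -c 'host: Stopped'
	# expected here: 2 (one per node); this run yields 1
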
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000: exit status 7 (29.98425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.34s)

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-980000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-980000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.180942417s)

-- stdout --
	* [multinode-980000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-980000" primary control-plane node in "multinode-980000" cluster
	* Restarting existing qemu2 VM for "multinode-980000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-980000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0813 17:23:34.349090    3683 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:23:34.349452    3683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:23:34.349456    3683 out.go:304] Setting ErrFile to fd 2...
	I0813 17:23:34.349459    3683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:23:34.349648    3683 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:23:34.350972    3683 out.go:298] Setting JSON to false
	I0813 17:23:34.367211    3683 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3178,"bootTime":1723591836,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:23:34.367287    3683 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:23:34.371178    3683 out.go:177] * [multinode-980000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:23:34.379057    3683 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:23:34.379121    3683 notify.go:220] Checking for updates...
	I0813 17:23:34.384254    3683 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:23:34.387106    3683 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:23:34.390077    3683 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:23:34.393082    3683 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:23:34.396047    3683 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:23:34.399362    3683 config.go:182] Loaded profile config "multinode-980000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:23:34.399612    3683 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:23:34.404105    3683 out.go:177] * Using the qemu2 driver based on existing profile
	I0813 17:23:34.411041    3683 start.go:297] selected driver: qemu2
	I0813 17:23:34.411049    3683 start.go:901] validating driver "qemu2" against &{Name:multinode-980000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-980000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:23:34.411115    3683 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:23:34.413252    3683 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 17:23:34.413292    3683 cni.go:84] Creating CNI manager for ""
	I0813 17:23:34.413297    3683 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0813 17:23:34.413335    3683 start.go:340] cluster config:
	{Name:multinode-980000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-980000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:23:34.416598    3683 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:23:34.423994    3683 out.go:177] * Starting "multinode-980000" primary control-plane node in "multinode-980000" cluster
	I0813 17:23:34.428070    3683 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:23:34.428084    3683 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 17:23:34.428091    3683 cache.go:56] Caching tarball of preloaded images
	I0813 17:23:34.428141    3683 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:23:34.428146    3683 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0813 17:23:34.428196    3683 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/multinode-980000/config.json ...
	I0813 17:23:34.428515    3683 start.go:360] acquireMachinesLock for multinode-980000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:23:34.428542    3683 start.go:364] duration metric: took 21µs to acquireMachinesLock for "multinode-980000"
	I0813 17:23:34.428550    3683 start.go:96] Skipping create...Using existing machine configuration
	I0813 17:23:34.428556    3683 fix.go:54] fixHost starting: 
	I0813 17:23:34.428667    3683 fix.go:112] recreateIfNeeded on multinode-980000: state=Stopped err=<nil>
	W0813 17:23:34.428674    3683 fix.go:138] unexpected machine state, will restart: <nil>
	I0813 17:23:34.437010    3683 out.go:177] * Restarting existing qemu2 VM for "multinode-980000" ...
	I0813 17:23:34.440029    3683 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:23:34.440063    3683 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:31:ce:fc:f4:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/disk.qcow2
	I0813 17:23:34.442116    3683 main.go:141] libmachine: STDOUT: 
	I0813 17:23:34.442132    3683 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:23:34.442157    3683 fix.go:56] duration metric: took 13.599458ms for fixHost
	I0813 17:23:34.442161    3683 start.go:83] releasing machines lock for "multinode-980000", held for 13.616167ms
	W0813 17:23:34.442167    3683 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:23:34.442196    3683 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:23:34.442201    3683 start.go:729] Will try again in 5 seconds ...
	I0813 17:23:39.444321    3683 start.go:360] acquireMachinesLock for multinode-980000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:23:39.444775    3683 start.go:364] duration metric: took 353.584µs to acquireMachinesLock for "multinode-980000"
	I0813 17:23:39.444918    3683 start.go:96] Skipping create...Using existing machine configuration
	I0813 17:23:39.444937    3683 fix.go:54] fixHost starting: 
	I0813 17:23:39.445663    3683 fix.go:112] recreateIfNeeded on multinode-980000: state=Stopped err=<nil>
	W0813 17:23:39.445690    3683 fix.go:138] unexpected machine state, will restart: <nil>
	I0813 17:23:39.453871    3683 out.go:177] * Restarting existing qemu2 VM for "multinode-980000" ...
	I0813 17:23:39.458055    3683 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:23:39.458289    3683 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:31:ce:fc:f4:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/multinode-980000/disk.qcow2
	I0813 17:23:39.467841    3683 main.go:141] libmachine: STDOUT: 
	I0813 17:23:39.467906    3683 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:23:39.467984    3683 fix.go:56] duration metric: took 23.048417ms for fixHost
	I0813 17:23:39.468002    3683 start.go:83] releasing machines lock for "multinode-980000", held for 23.205042ms
	W0813 17:23:39.468144    3683 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-980000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-980000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:23:39.475069    3683 out.go:177] 
	W0813 17:23:39.479146    3683 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:23:39.479169    3683 out.go:239] * 
	* 
	W0813 17:23:39.481939    3683 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:23:39.489050    3683 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-980000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
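
Editor's note: every start in this group dies at the same point: socket_vmnet_client cannot reach the daemon's socket. A minimal health check on the build host, assuming socket_vmnet is installed at the paths shown in the log above:

	ls -l /var/run/socket_vmnet              # the daemon's listening socket should exist
	pgrep -fl socket_vmnet                   # the daemon process should be running
	sudo brew services start socket_vmnet    # hypothetical fix if it was installed via Homebrew
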
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000: exit status 7 (68.250167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
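
Editor's note: the libmachine lines above show that qemu-system-aarch64 is launched through socket_vmnet_client, which connects to the daemon socket and hands the VM a networking file descriptor (the fd=3 in the -netdev flag). Assuming those wrapper semantics, the client can be smoke-tested without booting a VM:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# fails with "Connection refused" while the daemon is down, matching this run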

TestMultiNode/serial/ValidateNameConflict (20.12s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-980000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-980000-m01 --driver=qemu2 
E0813 17:23:46.980439    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-980000-m01 --driver=qemu2 : exit status 80 (9.842064708s)

-- stdout --
	* [multinode-980000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-980000-m01" primary control-plane node in "multinode-980000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-980000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-980000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-980000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-980000-m02 --driver=qemu2 : exit status 80 (10.044097875s)

-- stdout --
	* [multinode-980000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-980000-m02" primary control-plane node in "multinode-980000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-980000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-980000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-980000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-980000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-980000: exit status 83 (81.916833ms)

-- stdout --
	* The control-plane node multinode-980000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-980000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-980000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-980000 -n multinode-980000: exit status 7 (30.998333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.12s)
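
Editor's note: this test appears to exercise minikube's profile-name validation: starting a profile named multinode-980000-m01 should be rejected because it collides with the node-naming scheme of the existing cluster, starting multinode-980000-m02 should succeed as an independent profile, and the later "node add -p multinode-980000" must then fail. In this run the -m02 start failed for an unrelated reason (socket_vmnet), which is what trips the assertion:

	out/minikube-darwin-arm64 node list -p multinode-980000
	out/minikube-darwin-arm64 start -p multinode-980000-m01 --driver=qemu2   # expected: rejected (name conflict)
	out/minikube-darwin-arm64 start -p multinode-980000-m02 --driver=qemu2   # expected: succeeds; here it hit the socket_vmnet error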

TestPreload (9.96s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-080000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-080000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.806608459s)

-- stdout --
	* [test-preload-080000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-080000" primary control-plane node in "test-preload-080000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-080000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0813 17:23:59.825024    3746 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:23:59.825157    3746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:23:59.825160    3746 out.go:304] Setting ErrFile to fd 2...
	I0813 17:23:59.825163    3746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:23:59.825288    3746 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:23:59.826363    3746 out.go:298] Setting JSON to false
	I0813 17:23:59.842367    3746 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3203,"bootTime":1723591836,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:23:59.842448    3746 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:23:59.849253    3746 out.go:177] * [test-preload-080000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:23:59.857192    3746 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:23:59.857234    3746 notify.go:220] Checking for updates...
	I0813 17:23:59.865245    3746 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:23:59.868180    3746 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:23:59.871183    3746 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:23:59.874109    3746 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:23:59.877193    3746 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:23:59.880595    3746 config.go:182] Loaded profile config "multinode-980000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:23:59.880648    3746 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:23:59.884173    3746 out.go:177] * Using the qemu2 driver based on user configuration
	I0813 17:23:59.891251    3746 start.go:297] selected driver: qemu2
	I0813 17:23:59.891259    3746 start.go:901] validating driver "qemu2" against <nil>
	I0813 17:23:59.891266    3746 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:23:59.893667    3746 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 17:23:59.895114    3746 out.go:177] * Automatically selected the socket_vmnet network
	I0813 17:23:59.898299    3746 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 17:23:59.898333    3746 cni.go:84] Creating CNI manager for ""
	I0813 17:23:59.898339    3746 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0813 17:23:59.898343    3746 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 17:23:59.898375    3746 start.go:340] cluster config:
	{Name:test-preload-080000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-080000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:23:59.902075    3746 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:23:59.909111    3746 out.go:177] * Starting "test-preload-080000" primary control-plane node in "test-preload-080000" cluster
	I0813 17:23:59.913199    3746 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0813 17:23:59.913279    3746 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/test-preload-080000/config.json ...
	I0813 17:23:59.913296    3746 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/test-preload-080000/config.json: {Name:mkfeaf670e7155ce2b9456d67680c6559dcc6121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:23:59.913300    3746 cache.go:107] acquiring lock: {Name:mke14a3dc3194db543c276212c81745047c71d9e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:23:59.913320    3746 cache.go:107] acquiring lock: {Name:mkedb9a7e0e0f98634ef37f8f13c1d2c3ea131bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:23:59.913360    3746 cache.go:107] acquiring lock: {Name:mk0fdacdf98253d6cce9a57c71af7fd2c7d62ad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:23:59.913298    3746 cache.go:107] acquiring lock: {Name:mkb6ac0aed59521455f3a3b20d88c720ba9be35c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:23:59.913464    3746 cache.go:107] acquiring lock: {Name:mk06bf3664c3233c4dcc2a65c1c9b0bf79503003 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:23:59.913531    3746 cache.go:107] acquiring lock: {Name:mk405cef76b580fb21bde37537ad4ffa844ae505 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:23:59.913553    3746 cache.go:107] acquiring lock: {Name:mka70d14447873d6bf21438d0d21cbaffc12a511 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:23:59.913304    3746 cache.go:107] acquiring lock: {Name:mke19177f6f0622759d8a41e0caebe49a1d61a4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:23:59.913718    3746 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0813 17:23:59.913739    3746 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0813 17:23:59.913758    3746 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0813 17:23:59.913800    3746 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0813 17:23:59.913799    3746 start.go:360] acquireMachinesLock for test-preload-080000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:23:59.913739    3746 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 17:23:59.913854    3746 start.go:364] duration metric: took 40.75µs to acquireMachinesLock for "test-preload-080000"
	I0813 17:23:59.913872    3746 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0813 17:23:59.913910    3746 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0813 17:23:59.913871    3746 start.go:93] Provisioning new machine with config: &{Name:test-preload-080000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-080000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:23:59.913928    3746 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:23:59.914038    3746 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0813 17:23:59.918130    3746 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 17:23:59.921315    3746 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0813 17:23:59.922655    3746 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0813 17:23:59.922638    3746 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0813 17:23:59.922718    3746 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0813 17:23:59.922734    3746 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0813 17:23:59.922749    3746 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 17:23:59.922779    3746 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0813 17:23:59.922814    3746 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0813 17:23:59.936419    3746 start.go:159] libmachine.API.Create for "test-preload-080000" (driver="qemu2")
	I0813 17:23:59.936464    3746 client.go:168] LocalClient.Create starting
	I0813 17:23:59.936550    3746 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:23:59.936586    3746 main.go:141] libmachine: Decoding PEM data...
	I0813 17:23:59.936595    3746 main.go:141] libmachine: Parsing certificate...
	I0813 17:23:59.936650    3746 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:23:59.936674    3746 main.go:141] libmachine: Decoding PEM data...
	I0813 17:23:59.936680    3746 main.go:141] libmachine: Parsing certificate...
	I0813 17:23:59.937041    3746 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:24:00.084873    3746 main.go:141] libmachine: Creating SSH key...
	I0813 17:24:00.173493    3746 main.go:141] libmachine: Creating Disk image...
	I0813 17:24:00.173517    3746 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:24:00.173713    3746 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/test-preload-080000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/test-preload-080000/disk.qcow2
	I0813 17:24:00.183526    3746 main.go:141] libmachine: STDOUT: 
	I0813 17:24:00.183546    3746 main.go:141] libmachine: STDERR: 
	I0813 17:24:00.183601    3746 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/test-preload-080000/disk.qcow2 +20000M
	I0813 17:24:00.192543    3746 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:24:00.192561    3746 main.go:141] libmachine: STDERR: 
	I0813 17:24:00.192579    3746 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/test-preload-080000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/test-preload-080000/disk.qcow2
	I0813 17:24:00.192584    3746 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:24:00.192594    3746 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:24:00.192624    3746 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/test-preload-080000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/test-preload-080000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/test-preload-080000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:90:3d:a1:d7:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/test-preload-080000/disk.qcow2
	I0813 17:24:00.194353    3746 main.go:141] libmachine: STDOUT: 
	I0813 17:24:00.194374    3746 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:24:00.194400    3746 client.go:171] duration metric: took 257.933ms to LocalClient.Create
	I0813 17:24:00.500297    3746 cache.go:162] opening:  /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0813 17:24:00.503232    3746 cache.go:162] opening:  /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	W0813 17:24:00.512131    3746 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0813 17:24:00.512172    3746 cache.go:162] opening:  /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0813 17:24:00.518221    3746 cache.go:162] opening:  /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0813 17:24:00.519807    3746 cache.go:162] opening:  /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0813 17:24:00.538167    3746 cache.go:162] opening:  /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0813 17:24:00.548279    3746 cache.go:162] opening:  /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0813 17:24:00.711059    3746 cache.go:157] /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0813 17:24:00.711110    3746 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 797.777375ms
	I0813 17:24:00.711154    3746 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0813 17:24:00.795343    3746 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0813 17:24:00.795447    3746 cache.go:162] opening:  /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0813 17:24:01.031000    3746 cache.go:157] /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0813 17:24:01.031046    3746 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.11775925s
	I0813 17:24:01.031074    3746 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0813 17:24:02.194760    3746 start.go:128] duration metric: took 2.280833584s to createHost
	I0813 17:24:02.194823    3746 start.go:83] releasing machines lock for "test-preload-080000", held for 2.280989833s
	W0813 17:24:02.194893    3746 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:24:02.212020    3746 out.go:177] * Deleting "test-preload-080000" in qemu2 ...
	W0813 17:24:02.249053    3746 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:24:02.249075    3746 start.go:729] Will try again in 5 seconds ...
	I0813 17:24:02.463698    3746 cache.go:157] /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0813 17:24:02.463750    3746 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.550318625s
	I0813 17:24:02.463777    3746 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0813 17:24:02.659219    3746 cache.go:157] /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0813 17:24:02.659293    3746 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.746030083s
	I0813 17:24:02.659323    3746 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0813 17:24:04.483988    3746 cache.go:157] /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0813 17:24:04.484035    3746 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.570775875s
	I0813 17:24:04.484064    3746 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0813 17:24:05.434463    3746 cache.go:157] /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0813 17:24:05.434510    3746 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.521291875s
	I0813 17:24:05.434538    3746 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0813 17:24:06.737682    3746 cache.go:157] /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0813 17:24:06.737731    3746 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.824328084s
	I0813 17:24:06.737787    3746 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0813 17:24:07.249135    3746 start.go:360] acquireMachinesLock for test-preload-080000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:24:07.249551    3746 start.go:364] duration metric: took 338.875µs to acquireMachinesLock for "test-preload-080000"
	I0813 17:24:07.249670    3746 start.go:93] Provisioning new machine with config: &{Name:test-preload-080000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-080000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:24:07.249937    3746 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:24:07.262335    3746 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 17:24:07.312544    3746 start.go:159] libmachine.API.Create for "test-preload-080000" (driver="qemu2")
	I0813 17:24:07.312583    3746 client.go:168] LocalClient.Create starting
	I0813 17:24:07.312696    3746 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:24:07.312764    3746 main.go:141] libmachine: Decoding PEM data...
	I0813 17:24:07.312792    3746 main.go:141] libmachine: Parsing certificate...
	I0813 17:24:07.312863    3746 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:24:07.312908    3746 main.go:141] libmachine: Decoding PEM data...
	I0813 17:24:07.312933    3746 main.go:141] libmachine: Parsing certificate...
	I0813 17:24:07.313435    3746 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:24:07.473717    3746 main.go:141] libmachine: Creating SSH key...
	I0813 17:24:07.542215    3746 main.go:141] libmachine: Creating Disk image...
	I0813 17:24:07.542221    3746 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:24:07.542442    3746 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/test-preload-080000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/test-preload-080000/disk.qcow2
	I0813 17:24:07.551894    3746 main.go:141] libmachine: STDOUT: 
	I0813 17:24:07.551921    3746 main.go:141] libmachine: STDERR: 
	I0813 17:24:07.551973    3746 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/test-preload-080000/disk.qcow2 +20000M
	I0813 17:24:07.560078    3746 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:24:07.560092    3746 main.go:141] libmachine: STDERR: 
	I0813 17:24:07.560105    3746 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/test-preload-080000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/test-preload-080000/disk.qcow2
	I0813 17:24:07.560108    3746 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:24:07.560120    3746 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:24:07.560157    3746 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/test-preload-080000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/test-preload-080000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/test-preload-080000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:da:8d:6a:e6:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/test-preload-080000/disk.qcow2
	I0813 17:24:07.561883    3746 main.go:141] libmachine: STDOUT: 
	I0813 17:24:07.561907    3746 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:24:07.561921    3746 client.go:171] duration metric: took 249.337416ms to LocalClient.Create
	I0813 17:24:09.562696    3746 start.go:128] duration metric: took 2.31275925s to createHost
	I0813 17:24:09.562738    3746 start.go:83] releasing machines lock for "test-preload-080000", held for 2.313194875s
	W0813 17:24:09.563031    3746 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-080000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-080000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:24:09.571576    3746 out.go:177] 
	W0813 17:24:09.575569    3746 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:24:09.575625    3746 out.go:239] * 
	* 
	W0813 17:24:09.577901    3746 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:24:09.587606    3746 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-080000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-08-13 17:24:09.605222 -0700 PDT m=+2289.040513542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-080000 -n test-preload-080000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-080000 -n test-preload-080000: exit status 7 (67.571791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-080000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-080000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-080000
--- FAIL: TestPreload (9.96s)
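Every qemu2 start in this report fails before the VM boots for the same reason: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client exits with "Connection refused". A minimal Go sketch of that reachability check, using only the socket path shown in the logs (a hypothetical standalone probe, not minikube code):

	// probe.go: check whether the socket_vmnet daemon accepts connections.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Socket path taken from the failure messages above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On this CI host this reports "connection refused", matching the log.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, restarting the socket_vmnet daemon on the CI host (it runs outside minikube) should clear this whole family of GUEST_PROVISION failures.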

                                                
                                    
TestScheduledStopUnix (9.97s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-901000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-901000 --memory=2048 --driver=qemu2 : exit status 80 (9.816946291s)

                                                
                                                
-- stdout --
	* [scheduled-stop-901000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-901000" primary control-plane node in "scheduled-stop-901000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-901000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-901000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-901000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-901000" primary control-plane node in "scheduled-stop-901000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-901000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-901000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-13 17:24:19.572375 -0700 PDT m=+2299.007810501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-901000 -n scheduled-stop-901000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-901000 -n scheduled-stop-901000: exit status 7 (70.821292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-901000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-901000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-901000
--- FAIL: TestScheduledStopUnix (9.97s)
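The same socket_vmnet outage fails this test during host creation; note the driver retries once ("StartHost failed, but will try again") with an identical result. One defensive pattern for qemu2-backed tests on macOS is to probe the socket up front and skip instead of failing with exit status 80; a hypothetical helper (requireSocketVMnet is not part of these tests):

	package integration

	import (
		"net"
		"testing"
		"time"
	)

	// requireSocketVMnet skips a qemu2-backed test when the socket_vmnet
	// daemon is not accepting connections, rather than letting the start
	// fail with exit status 80 as seen above.
	func requireSocketVMnet(t *testing.T) {
		t.Helper()
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
		if err != nil {
			t.Skipf("socket_vmnet unavailable: %v", err)
		}
		conn.Close()
	}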

                                                
                                    
TestSkaffold (12.66s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1406622205 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1406622205 version: (1.063335541s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-087000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-087000 --memory=2600 --driver=qemu2 : exit status 80 (9.867266625s)

                                                
                                                
-- stdout --
	* [skaffold-087000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-087000" primary control-plane node in "skaffold-087000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-087000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-087000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [skaffold-087000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-087000" primary control-plane node in "skaffold-087000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-087000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-087000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-08-13 17:24:32.238721 -0700 PDT m=+2311.674340001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-087000 -n skaffold-087000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-087000 -n skaffold-087000: exit status 7 (61.747208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-087000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-087000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-087000
--- FAIL: TestSkaffold (12.66s)
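Each failure message recommends running "minikube delete -p <profile>", which is exactly what the harness's cleanup step does above. A standalone sketch of that cleanup call (a hypothetical wrapper; the binary path is the one used throughout this report):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Remove the half-created profile so the next start is clean.
		cmd := exec.Command("out/minikube-darwin-arm64", "delete", "-p", "skaffold-087000")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "delete failed: %v\n", err)
			os.Exit(1)
		}
	}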

                                                
                                    
TestRunningBinaryUpgrade (603.93s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3776466476 start -p running-upgrade-126000 --memory=2200 --vm-driver=qemu2 
E0813 17:25:36.793340    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/functional-174000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3776466476 start -p running-upgrade-126000 --memory=2200 --vm-driver=qemu2 : (1m6.711727792s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-126000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0813 17:26:50.079120    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-126000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m23.350849209s)

                                                
                                                
-- stdout --
	* [running-upgrade-126000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-126000" primary control-plane node in "running-upgrade-126000" cluster
	* Updating the running qemu2 "running-upgrade-126000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 17:26:21.196185    4162 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:26:21.196319    4162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:26:21.196324    4162 out.go:304] Setting ErrFile to fd 2...
	I0813 17:26:21.196327    4162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:26:21.196462    4162 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:26:21.197603    4162 out.go:298] Setting JSON to false
	I0813 17:26:21.214283    4162 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3345,"bootTime":1723591836,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:26:21.214355    4162 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:26:21.218352    4162 out.go:177] * [running-upgrade-126000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:26:21.225271    4162 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:26:21.225321    4162 notify.go:220] Checking for updates...
	I0813 17:26:21.232158    4162 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:26:21.236251    4162 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:26:21.239315    4162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:26:21.240569    4162 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:26:21.243321    4162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:26:21.246622    4162 config.go:182] Loaded profile config "running-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0813 17:26:21.250288    4162 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0813 17:26:21.253251    4162 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:26:21.257255    4162 out.go:177] * Using the qemu2 driver based on existing profile
	I0813 17:26:21.262262    4162 start.go:297] selected driver: qemu2
	I0813 17:26:21.262268    4162 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50281 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0813 17:26:21.262316    4162 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:26:21.264547    4162 cni.go:84] Creating CNI manager for ""
	I0813 17:26:21.264565    4162 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0813 17:26:21.264590    4162 start.go:340] cluster config:
	{Name:running-upgrade-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50281 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0813 17:26:21.264638    4162 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:26:21.272457    4162 out.go:177] * Starting "running-upgrade-126000" primary control-plane node in "running-upgrade-126000" cluster
	I0813 17:26:21.276215    4162 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0813 17:26:21.276229    4162 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0813 17:26:21.276235    4162 cache.go:56] Caching tarball of preloaded images
	I0813 17:26:21.276284    4162 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:26:21.276289    4162 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0813 17:26:21.276338    4162 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/running-upgrade-126000/config.json ...
	I0813 17:26:21.276760    4162 start.go:360] acquireMachinesLock for running-upgrade-126000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:26:21.276791    4162 start.go:364] duration metric: took 25.375µs to acquireMachinesLock for "running-upgrade-126000"
	I0813 17:26:21.276800    4162 start.go:96] Skipping create...Using existing machine configuration
	I0813 17:26:21.276804    4162 fix.go:54] fixHost starting: 
	I0813 17:26:21.277475    4162 fix.go:112] recreateIfNeeded on running-upgrade-126000: state=Running err=<nil>
	W0813 17:26:21.277484    4162 fix.go:138] unexpected machine state, will restart: <nil>
	I0813 17:26:21.281233    4162 out.go:177] * Updating the running qemu2 "running-upgrade-126000" VM ...
	I0813 17:26:21.289258    4162 machine.go:94] provisionDockerMachine start ...
	I0813 17:26:21.289289    4162 main.go:141] libmachine: Using SSH client type: native
	I0813 17:26:21.289397    4162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030145a0] 0x103016e00 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0813 17:26:21.289402    4162 main.go:141] libmachine: About to run SSH command:
	hostname
	I0813 17:26:21.355875    4162 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-126000
	
	I0813 17:26:21.355894    4162 buildroot.go:166] provisioning hostname "running-upgrade-126000"
	I0813 17:26:21.355946    4162 main.go:141] libmachine: Using SSH client type: native
	I0813 17:26:21.356062    4162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030145a0] 0x103016e00 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0813 17:26:21.356070    4162 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-126000 && echo "running-upgrade-126000" | sudo tee /etc/hostname
	I0813 17:26:21.424576    4162 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-126000
	
	I0813 17:26:21.424616    4162 main.go:141] libmachine: Using SSH client type: native
	I0813 17:26:21.424726    4162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030145a0] 0x103016e00 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0813 17:26:21.424733    4162 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-126000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-126000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-126000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 17:26:21.488454    4162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0813 17:26:21.488465    4162 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19429-1127/.minikube CaCertPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19429-1127/.minikube}
	I0813 17:26:21.488470    4162 buildroot.go:174] setting up certificates
	I0813 17:26:21.488475    4162 provision.go:84] configureAuth start
	I0813 17:26:21.488478    4162 provision.go:143] copyHostCerts
	I0813 17:26:21.488526    4162 exec_runner.go:144] found /Users/jenkins/minikube-integration/19429-1127/.minikube/ca.pem, removing ...
	I0813 17:26:21.488531    4162 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19429-1127/.minikube/ca.pem
	I0813 17:26:21.488807    4162 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19429-1127/.minikube/ca.pem (1082 bytes)
	I0813 17:26:21.488983    4162 exec_runner.go:144] found /Users/jenkins/minikube-integration/19429-1127/.minikube/cert.pem, removing ...
	I0813 17:26:21.488987    4162 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19429-1127/.minikube/cert.pem
	I0813 17:26:21.489030    4162 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19429-1127/.minikube/cert.pem (1123 bytes)
	I0813 17:26:21.489131    4162 exec_runner.go:144] found /Users/jenkins/minikube-integration/19429-1127/.minikube/key.pem, removing ...
	I0813 17:26:21.489136    4162 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19429-1127/.minikube/key.pem
	I0813 17:26:21.489178    4162 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19429-1127/.minikube/key.pem (1675 bytes)
	I0813 17:26:21.489269    4162 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-126000 san=[127.0.0.1 localhost minikube running-upgrade-126000]
	I0813 17:26:21.719185    4162 provision.go:177] copyRemoteCerts
	I0813 17:26:21.719232    4162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 17:26:21.719247    4162 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/running-upgrade-126000/id_rsa Username:docker}
	I0813 17:26:21.753738    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 17:26:21.760443    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0813 17:26:21.767380    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 17:26:21.774184    4162 provision.go:87] duration metric: took 285.702875ms to configureAuth
	I0813 17:26:21.774193    4162 buildroot.go:189] setting minikube options for container-runtime
	I0813 17:26:21.774293    4162 config.go:182] Loaded profile config "running-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0813 17:26:21.774327    4162 main.go:141] libmachine: Using SSH client type: native
	I0813 17:26:21.774425    4162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030145a0] 0x103016e00 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0813 17:26:21.774434    4162 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0813 17:26:21.839405    4162 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0813 17:26:21.839416    4162 buildroot.go:70] root file system type: tmpfs
	I0813 17:26:21.839466    4162 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0813 17:26:21.839513    4162 main.go:141] libmachine: Using SSH client type: native
	I0813 17:26:21.839630    4162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030145a0] 0x103016e00 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0813 17:26:21.839664    4162 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0813 17:26:21.906088    4162 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0813 17:26:21.906132    4162 main.go:141] libmachine: Using SSH client type: native
	I0813 17:26:21.906256    4162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030145a0] 0x103016e00 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0813 17:26:21.906264    4162 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0813 17:26:21.973207    4162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0813 17:26:21.973219    4162 machine.go:97] duration metric: took 683.9655ms to provisionDockerMachine
	I0813 17:26:21.973225    4162 start.go:293] postStartSetup for "running-upgrade-126000" (driver="qemu2")
	I0813 17:26:21.973232    4162 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 17:26:21.973278    4162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 17:26:21.973287    4162 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/running-upgrade-126000/id_rsa Username:docker}
	I0813 17:26:22.007689    4162 ssh_runner.go:195] Run: cat /etc/os-release
	I0813 17:26:22.009474    4162 info.go:137] Remote host: Buildroot 2021.02.12
	I0813 17:26:22.009481    4162 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19429-1127/.minikube/addons for local assets ...
	I0813 17:26:22.009550    4162 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19429-1127/.minikube/files for local assets ...
	I0813 17:26:22.009635    4162 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19429-1127/.minikube/files/etc/ssl/certs/16352.pem -> 16352.pem in /etc/ssl/certs
	I0813 17:26:22.009725    4162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0813 17:26:22.012148    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/files/etc/ssl/certs/16352.pem --> /etc/ssl/certs/16352.pem (1708 bytes)
	I0813 17:26:22.018698    4162 start.go:296] duration metric: took 45.468208ms for postStartSetup
	I0813 17:26:22.018710    4162 fix.go:56] duration metric: took 741.916958ms for fixHost
	I0813 17:26:22.018740    4162 main.go:141] libmachine: Using SSH client type: native
	I0813 17:26:22.018843    4162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030145a0] 0x103016e00 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0813 17:26:22.018847    4162 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0813 17:26:22.082447    4162 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723595182.115283419
	
	I0813 17:26:22.082453    4162 fix.go:216] guest clock: 1723595182.115283419
	I0813 17:26:22.082457    4162 fix.go:229] Guest: 2024-08-13 17:26:22.115283419 -0700 PDT Remote: 2024-08-13 17:26:22.018712 -0700 PDT m=+0.842972376 (delta=96.571419ms)
	I0813 17:26:22.082468    4162 fix.go:200] guest clock delta is within tolerance: 96.571419ms
	I0813 17:26:22.082470    4162 start.go:83] releasing machines lock for "running-upgrade-126000", held for 805.68675ms
	I0813 17:26:22.082518    4162 ssh_runner.go:195] Run: cat /version.json
	I0813 17:26:22.082527    4162 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/running-upgrade-126000/id_rsa Username:docker}
	I0813 17:26:22.082556    4162 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0813 17:26:22.082571    4162 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/running-upgrade-126000/id_rsa Username:docker}
	W0813 17:26:22.116348    4162 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0813 17:26:22.116408    4162 ssh_runner.go:195] Run: systemctl --version
	I0813 17:26:22.118221    4162 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0813 17:26:22.120010    4162 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0813 17:26:22.120036    4162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0813 17:26:22.123277    4162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0813 17:26:22.127813    4162 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0813 17:26:22.127820    4162 start.go:495] detecting cgroup driver to use...
	I0813 17:26:22.127884    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 17:26:22.133248    4162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0813 17:26:22.136149    4162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0813 17:26:22.139591    4162 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0813 17:26:22.139617    4162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0813 17:26:22.143323    4162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0813 17:26:22.147668    4162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0813 17:26:22.151094    4162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0813 17:26:22.153925    4162 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0813 17:26:22.156921    4162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0813 17:26:22.159831    4162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0813 17:26:22.162826    4162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0813 17:26:22.166092    4162 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 17:26:22.168510    4162 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 17:26:22.171484    4162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 17:26:22.266988    4162 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0813 17:26:22.274437    4162 start.go:495] detecting cgroup driver to use...
	I0813 17:26:22.274517    4162 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0813 17:26:22.282690    4162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0813 17:26:22.287764    4162 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0813 17:26:22.293866    4162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0813 17:26:22.298666    4162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0813 17:26:22.303187    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 17:26:22.308654    4162 ssh_runner.go:195] Run: which cri-dockerd
	I0813 17:26:22.309933    4162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0813 17:26:22.312500    4162 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0813 17:26:22.317171    4162 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0813 17:26:22.415136    4162 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0813 17:26:22.509080    4162 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0813 17:26:22.509136    4162 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0813 17:26:22.514434    4162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 17:26:22.607614    4162 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0813 17:26:25.520151    4162 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.912560875s)
	I0813 17:26:25.520228    4162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0813 17:26:25.524992    4162 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0813 17:26:25.532054    4162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0813 17:26:25.536949    4162 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0813 17:26:25.617964    4162 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0813 17:26:25.714050    4162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 17:26:25.798945    4162 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0813 17:26:25.805679    4162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0813 17:26:25.810754    4162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 17:26:25.890053    4162 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0813 17:26:25.933593    4162 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0813 17:26:25.933664    4162 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0813 17:26:25.935813    4162 start.go:563] Will wait 60s for crictl version
	I0813 17:26:25.935863    4162 ssh_runner.go:195] Run: which crictl
	I0813 17:26:25.937246    4162 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0813 17:26:25.948631    4162 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0813 17:26:25.948696    4162 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0813 17:26:25.961210    4162 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0813 17:26:25.981397    4162 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0813 17:26:25.981464    4162 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0813 17:26:25.982716    4162 kubeadm.go:883] updating cluster {Name:running-upgrade-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50281 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0813 17:26:25.982757    4162 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0813 17:26:25.982794    4162 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0813 17:26:25.993092    4162 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0813 17:26:25.993101    4162 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0813 17:26:25.993146    4162 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0813 17:26:25.996327    4162 ssh_runner.go:195] Run: which lz4
	I0813 17:26:25.997640    4162 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0813 17:26:25.998986    4162 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0813 17:26:25.998999    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0813 17:26:26.937099    4162 docker.go:649] duration metric: took 939.502583ms to copy over tarball
	I0813 17:26:26.937157    4162 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 17:26:28.062638    4162 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.125484583s)
	I0813 17:26:28.062654    4162 ssh_runner.go:146] rm: /preloaded.tar.lz4
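
The preload path above is check, copy, extract, delete: stat /preloaded.tar.lz4, scp the ~360 MB tarball when the stat fails with "No such file or directory", untar it into /var, then remove it. A minimal local sketch of the check-then-copy half, assuming local files in place of the ssh_runner transport:

package main

import (
	"errors"
	"fmt"
	"io"
	"os"
)

// ensureFile copies src to dst only when dst does not already exist,
// mirroring the stat-before-scp pattern in the log above.
func ensureFile(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, skip the transfer
	} else if !errors.Is(err, os.ErrNotExist) {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	n, err := io.Copy(out, in)
	if err != nil {
		return err
	}
	fmt.Printf("copied %d bytes to %s\n", n, dst)
	return nil
}

func main() {
	if err := ensureFile("preloaded.tar.lz4", "/tmp/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
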
	I0813 17:26:28.078802    4162 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0813 17:26:28.081843    4162 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0813 17:26:28.087095    4162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 17:26:28.158057    4162 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0813 17:26:29.363932    4162 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.205876208s)
	I0813 17:26:29.364033    4162 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0813 17:26:29.375057    4162 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0813 17:26:29.375066    4162 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0813 17:26:29.375071    4162 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0813 17:26:29.379413    4162 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0813 17:26:29.382127    4162 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0813 17:26:29.384553    4162 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0813 17:26:29.384625    4162 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0813 17:26:29.387026    4162 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 17:26:29.387112    4162 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0813 17:26:29.388312    4162 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0813 17:26:29.388397    4162 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0813 17:26:29.390092    4162 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 17:26:29.390115    4162 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0813 17:26:29.391027    4162 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0813 17:26:29.391063    4162 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0813 17:26:29.391985    4162 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0813 17:26:29.392003    4162 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0813 17:26:29.392859    4162 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0813 17:26:29.393380    4162 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0813 17:26:29.800547    4162 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0813 17:26:29.813113    4162 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0813 17:26:29.813144    4162 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0813 17:26:29.813205    4162 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0813 17:26:29.814146    4162 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0813 17:26:29.823283    4162 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0813 17:26:29.829414    4162 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0813 17:26:29.834576    4162 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0813 17:26:29.836314    4162 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0813 17:26:29.836332    4162 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0813 17:26:29.836362    4162 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0813 17:26:29.838940    4162 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0813 17:26:29.838957    4162 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0813 17:26:29.838998    4162 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	W0813 17:26:29.847375    4162 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0813 17:26:29.847502    4162 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0813 17:26:29.855440    4162 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0813 17:26:29.855460    4162 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0813 17:26:29.855512    4162 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0813 17:26:29.855517    4162 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0813 17:26:29.861044    4162 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0813 17:26:29.862759    4162 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0813 17:26:29.862775    4162 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0813 17:26:29.862813    4162 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0813 17:26:29.882420    4162 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0813 17:26:29.882544    4162 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0813 17:26:29.882657    4162 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0813 17:26:29.882714    4162 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0813 17:26:29.884558    4162 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0813 17:26:29.884569    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0813 17:26:29.884571    4162 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0813 17:26:29.884581    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0813 17:26:29.884852    4162 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0813 17:26:29.911554    4162 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0813 17:26:29.911574    4162 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0813 17:26:29.911615    4162 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0813 17:26:29.920811    4162 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0813 17:26:29.920827    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0813 17:26:29.926130    4162 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0813 17:26:29.945882    4162 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0813 17:26:29.978880    4162 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0813 17:26:29.978902    4162 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0813 17:26:29.978907    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0813 17:26:29.978906    4162 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0813 17:26:29.978925    4162 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0813 17:26:29.978978    4162 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0813 17:26:29.990776    4162 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0813 17:26:30.053698    4162 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0813 17:26:30.167994    4162 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0813 17:26:30.168106    4162 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 17:26:30.184584    4162 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0813 17:26:30.184608    4162 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 17:26:30.184659    4162 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 17:26:30.266747    4162 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0813 17:26:30.266877    4162 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0813 17:26:30.268810    4162 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0813 17:26:30.268832    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0813 17:26:30.314301    4162 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0813 17:26:30.314315    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0813 17:26:30.607499    4162 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0813 17:26:30.607541    4162 cache_images.go:92] duration metric: took 1.232480917s to LoadCachedImages
	W0813 17:26:30.607581    4162 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0: no such file or directory
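
Each "needs transfer" decision above follows the same shape: inspect the image ID in the runtime, and if it is missing or differs from the expected hash, remove the stale copy and stream the cached tarball back in via "cat <tar> | docker load". A rough Go sketch of that pattern, assuming a local docker CLI rather than minikube's ssh_runner (the image name, hash, and tarball path are taken from the pause:3.7 lines above):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// needsTransfer reports whether image is absent from the runtime or present
// under a different ID than expected, as in cache_images.go's check above.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not present in the runtime at all
	}
	got := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
	return got != wantID
}

// loadFromCache removes any stale copy and streams the cached tarball into
// the daemon, the "cat <tar> | docker load" step from the log above.
func loadFromCache(image, tarball string) error {
	_ = exec.Command("docker", "rmi", image).Run() // ignore "no such image"
	f, err := os.Open(tarball)
	if err != nil {
		return err
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	img := "registry.k8s.io/pause:3.7"
	id := "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550"
	if needsTransfer(img, id) {
		if err := loadFromCache(img, "/var/lib/minikube/images/pause_3.7"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
}
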
	I0813 17:26:30.607589    4162 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0813 17:26:30.607636    4162 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-126000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0813 17:26:30.607699    4162 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0813 17:26:30.631378    4162 cni.go:84] Creating CNI manager for ""
	I0813 17:26:30.631390    4162 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0813 17:26:30.631396    4162 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0813 17:26:30.631404    4162 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-126000 NodeName:running-upgrade-126000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0813 17:26:30.631469    4162 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-126000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 17:26:30.631524    4162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0813 17:26:30.634727    4162 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 17:26:30.634762    4162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 17:26:30.637822    4162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0813 17:26:30.642515    4162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 17:26:30.647167    4162 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0813 17:26:30.652062    4162 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0813 17:26:30.653394    4162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 17:26:30.738805    4162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0813 17:26:30.743231    4162 certs.go:68] Setting up /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/running-upgrade-126000 for IP: 10.0.2.15
	I0813 17:26:30.743240    4162 certs.go:194] generating shared ca certs ...
	I0813 17:26:30.743248    4162 certs.go:226] acquiring lock for ca certs: {Name:mk1c25d4292e2fe754770039b132c434f4539a16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:26:30.743394    4162 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/ca.key
	I0813 17:26:30.743430    4162 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/proxy-client-ca.key
	I0813 17:26:30.743435    4162 certs.go:256] generating profile certs ...
	I0813 17:26:30.743489    4162 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/running-upgrade-126000/client.key
	I0813 17:26:30.743509    4162 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/running-upgrade-126000/apiserver.key.9dc643d5
	I0813 17:26:30.743517    4162 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/running-upgrade-126000/apiserver.crt.9dc643d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0813 17:26:30.795461    4162 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/running-upgrade-126000/apiserver.crt.9dc643d5 ...
	I0813 17:26:30.795466    4162 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/running-upgrade-126000/apiserver.crt.9dc643d5: {Name:mkb778aefdf534d9d175b9bef3057ccc299bd1e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:26:30.795892    4162 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/running-upgrade-126000/apiserver.key.9dc643d5 ...
	I0813 17:26:30.795902    4162 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/running-upgrade-126000/apiserver.key.9dc643d5: {Name:mkb988978400f5fa411160dc83df48d3129b60a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:26:30.796051    4162 certs.go:381] copying /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/running-upgrade-126000/apiserver.crt.9dc643d5 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/running-upgrade-126000/apiserver.crt
	I0813 17:26:30.796198    4162 certs.go:385] copying /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/running-upgrade-126000/apiserver.key.9dc643d5 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/running-upgrade-126000/apiserver.key
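
crypto.go's "Generating cert ... with IP's" step above issues an apiserver certificate whose SANs carry the cluster service IP, loopback, and the node IP. A minimal crypto/x509 sketch of building a cert with those IP SANs; it self-signs for brevity, whereas the real code signs with the minikubeCA key:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs listed in the log: service IP, loopback, and node IP.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("10.0.2.15"),
		},
	}
	// Self-signed here only to keep the sketch self-contained.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
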
	I0813 17:26:30.796332    4162 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/running-upgrade-126000/proxy-client.key
	I0813 17:26:30.796454    4162 certs.go:484] found cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/1635.pem (1338 bytes)
	W0813 17:26:30.796476    4162 certs.go:480] ignoring /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/1635_empty.pem, impossibly tiny 0 bytes
	I0813 17:26:30.796480    4162 certs.go:484] found cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 17:26:30.796501    4162 certs.go:484] found cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem (1082 bytes)
	I0813 17:26:30.796518    4162 certs.go:484] found cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem (1123 bytes)
	I0813 17:26:30.796541    4162 certs.go:484] found cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/key.pem (1675 bytes)
	I0813 17:26:30.796579    4162 certs.go:484] found cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/files/etc/ssl/certs/16352.pem (1708 bytes)
	I0813 17:26:30.796915    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 17:26:30.803511    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 17:26:30.809704    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 17:26:30.822741    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 17:26:30.829436    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/running-upgrade-126000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0813 17:26:30.837172    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/running-upgrade-126000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 17:26:30.849229    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/running-upgrade-126000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 17:26:30.857327    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/running-upgrade-126000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 17:26:30.872146    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 17:26:30.894092    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/1635.pem --> /usr/share/ca-certificates/1635.pem (1338 bytes)
	I0813 17:26:30.903187    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/files/etc/ssl/certs/16352.pem --> /usr/share/ca-certificates/16352.pem (1708 bytes)
	I0813 17:26:30.910247    4162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 17:26:30.915551    4162 ssh_runner.go:195] Run: openssl version
	I0813 17:26:30.917369    4162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 17:26:30.920360    4162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 17:26:30.921853    4162 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:46 /usr/share/ca-certificates/minikubeCA.pem
	I0813 17:26:30.921874    4162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 17:26:30.923551    4162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 17:26:30.926176    4162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1635.pem && ln -fs /usr/share/ca-certificates/1635.pem /etc/ssl/certs/1635.pem"
	I0813 17:26:30.929155    4162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1635.pem
	I0813 17:26:30.930615    4162 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:53 /usr/share/ca-certificates/1635.pem
	I0813 17:26:30.930635    4162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1635.pem
	I0813 17:26:30.932518    4162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1635.pem /etc/ssl/certs/51391683.0"
	I0813 17:26:30.935593    4162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16352.pem && ln -fs /usr/share/ca-certificates/16352.pem /etc/ssl/certs/16352.pem"
	I0813 17:26:30.939273    4162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16352.pem
	I0813 17:26:30.940640    4162 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:53 /usr/share/ca-certificates/16352.pem
	I0813 17:26:30.940667    4162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16352.pem
	I0813 17:26:30.942504    4162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16352.pem /etc/ssl/certs/3ec20f2e.0"
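
The openssl/ln pairs above install each CA under OpenSSL's hashed-lookup convention: "openssl x509 -hash" prints the subject hash (b5213941 for minikubeCA here), and /etc/ssl/certs/<hash>.0 is symlinked to the PEM so lookup-by-hash finds it. A small Go sketch of that pair, assuming an openssl binary on PATH and the same privileges as the sudo ln -fs above:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash reproduces the "openssl x509 -hash" + "ln -fs" pair
// from the log: OpenSSL looks CAs up by <subject-hash>.0 in the certs dir.
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA above
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mimic ln -fs by replacing any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
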
	I0813 17:26:30.945198    4162 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0813 17:26:30.946704    4162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0813 17:26:30.948320    4162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0813 17:26:30.950144    4162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0813 17:26:30.951997    4162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0813 17:26:30.953890    4162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0813 17:26:30.955524    4162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
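
The "-checkend 86400" runs above exit non-zero when a certificate expires within the next 24 hours, which is how the existing control-plane certs are judged still usable before the restart. A rough pure-Go equivalent of that check:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in pemPath expires
// inside the given window, mirroring `openssl x509 -checkend`.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
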
	I0813 17:26:30.957410    4162 kubeadm.go:392] StartCluster: {Name:running-upgrade-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50281 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0813 17:26:30.957471    4162 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0813 17:26:30.967479    4162 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 17:26:30.970772    4162 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0813 17:26:30.970777    4162 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0813 17:26:30.970798    4162 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0813 17:26:30.973436    4162 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 17:26:30.973695    4162 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-126000" does not appear in /Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:26:30.973753    4162 kubeconfig.go:62] /Users/jenkins/minikube-integration/19429-1127/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-126000" cluster setting kubeconfig missing "running-upgrade-126000" context setting]
	I0813 17:26:30.973917    4162 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/kubeconfig: {Name:mk4f6a628d9f9f6550ed229faba2a879ed685a75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:26:30.974582    4162 kapi.go:59] client config for running-upgrade-126000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/running-upgrade-126000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/running-upgrade-126000/client.key", CAFile:"/Users/jenkins/minikube-integration/19429-1127/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1045cbe30), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 17:26:30.974892    4162 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 17:26:30.977659    4162 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-126000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0813 17:26:30.977665    4162 kubeadm.go:1160] stopping kube-system containers ...
	I0813 17:26:30.977705    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0813 17:26:30.988723    4162 docker.go:483] Stopping containers: [711b1c09ff24 71aebb6e34cf 544ef875020d d351e8345d50 cc436702f838 e007cb7130c9 2f4b7ca98454 6a4674d869c5 d06e29ee9496 4c9c0bc1813a fe8d6a8e1434 9cdbfd227628 4f718d28b77f 2f19f045e3b3 50196f3f05ff 46ac5625701c]
	I0813 17:26:30.988811    4162 ssh_runner.go:195] Run: docker stop 711b1c09ff24 71aebb6e34cf 544ef875020d d351e8345d50 cc436702f838 e007cb7130c9 2f4b7ca98454 6a4674d869c5 d06e29ee9496 4c9c0bc1813a fe8d6a8e1434 9cdbfd227628 4f718d28b77f 2f19f045e3b3 50196f3f05ff 46ac5625701c
	I0813 17:26:31.066718    4162 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0813 17:26:31.139072    4162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 17:26:31.142570    4162 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Aug 14 00:26 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Aug 14 00:26 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug 14 00:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Aug 14 00:26 /etc/kubernetes/scheduler.conf
	
	I0813 17:26:31.142602    4162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/admin.conf
	I0813 17:26:31.145419    4162 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 17:26:31.145448    4162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0813 17:26:31.148375    4162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/kubelet.conf
	I0813 17:26:31.151402    4162 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 17:26:31.151433    4162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0813 17:26:31.154269    4162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/controller-manager.conf
	I0813 17:26:31.156895    4162 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 17:26:31.156921    4162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0813 17:26:31.159800    4162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/scheduler.conf
	I0813 17:26:31.164423    4162 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 17:26:31.164477    4162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0813 17:26:31.170479    4162 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 17:26:31.173923    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 17:26:31.197297    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 17:26:31.604977    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 17:26:31.811365    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 17:26:31.835297    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0813 17:26:31.863925    4162 api_server.go:52] waiting for apiserver process to appear ...
	I0813 17:26:31.864006    4162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 17:26:32.367153    4162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 17:26:32.866380    4162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 17:26:33.366058    4162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 17:26:33.370373    4162 api_server.go:72] duration metric: took 1.506469417s to wait for apiserver process to appear ...
	I0813 17:26:33.370384    4162 api_server.go:88] waiting for apiserver healthz status ...
	I0813 17:26:33.370396    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:26:38.372623    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:26:38.372693    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:26:43.373385    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:26:43.373453    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:26:48.374088    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:26:48.374128    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:26:53.374885    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:26:53.375022    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:26:58.376712    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:26:58.376749    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:27:03.378147    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:27:03.378191    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:27:08.379983    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:27:08.380033    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:27:13.382401    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:27:13.382483    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:27:18.385572    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:27:18.385612    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:27:23.385997    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:27:23.386265    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:27:28.388878    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:27:28.388955    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:27:33.390478    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
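
The healthz loop above issues one GET to https://10.0.2.15:8443/healthz roughly every five seconds and reports "stopped" on a client-side timeout; no probe ever succeeds here, which is why the run falls through to log gathering below. A minimal sketch of that probe, assuming a 5s client timeout and skipping TLS verification only to keep the sketch self-contained (the real client trusts minikubeCA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs a single apiserver health probe with a hard
// client-side timeout, matching the ~5s cadence visible in the log.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "context deadline exceeded", as in the log
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println("stopped:", err)
			time.Sleep(time.Second) // brief backoff before the next probe
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
	fmt.Println("gave up waiting for healthz")
}
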
	I0813 17:27:33.390841    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:27:33.426387    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:27:33.426522    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:27:33.447185    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:27:33.447287    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:27:33.462016    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:27:33.462079    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:27:33.473950    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:27:33.474035    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:27:33.484933    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:27:33.484999    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:27:33.500301    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:27:33.500364    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:27:33.510766    4162 logs.go:276] 0 containers: []
	W0813 17:27:33.510779    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:27:33.510838    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:27:33.521396    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:27:33.521412    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:27:33.521417    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:27:33.525991    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:27:33.525999    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:27:33.599815    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:27:33.599829    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:27:33.612204    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:27:33.612217    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:27:33.652032    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:27:33.652042    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:27:33.664386    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:27:33.664398    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:27:33.681654    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:27:33.681669    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:27:33.692781    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:27:33.692791    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:27:33.707742    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:27:33.707760    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:27:33.726239    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:27:33.726251    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:27:33.737512    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:27:33.737529    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:27:33.749305    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:27:33.749316    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:27:33.763101    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:27:33.763110    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:27:33.785620    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:27:33.785632    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:27:33.800073    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:27:33.800083    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:27:33.813652    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:27:33.813664    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:27:33.839971    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:27:33.839980    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:27:36.355739    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:27:41.358561    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:27:41.358969    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:27:41.397673    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:27:41.397810    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:27:41.419929    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:27:41.420027    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:27:41.435476    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:27:41.435553    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:27:41.447732    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:27:41.447800    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:27:41.458676    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:27:41.458741    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:27:41.469630    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:27:41.469696    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:27:41.480775    4162 logs.go:276] 0 containers: []
	W0813 17:27:41.480787    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:27:41.480836    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:27:41.492244    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:27:41.492262    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:27:41.492267    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:27:41.497419    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:27:41.497428    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:27:41.514257    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:27:41.514267    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:27:41.528797    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:27:41.528808    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:27:41.540608    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:27:41.540619    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:27:41.577706    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:27:41.577712    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:27:41.614201    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:27:41.614212    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:27:41.634638    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:27:41.634649    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:27:41.646085    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:27:41.646099    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:27:41.659796    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:27:41.659807    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:27:41.674024    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:27:41.674033    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:27:41.685725    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:27:41.685736    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:27:41.702816    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:27:41.702827    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:27:41.727555    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:27:41.727562    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:27:41.740558    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:27:41.740567    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:27:41.754656    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:27:41.754666    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:27:41.765998    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:27:41.766011    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:27:44.279197    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:27:49.281946    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
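The two lines above are one full probe: minikube GETs https://10.0.2.15:8443/healthz, the client gives up after about five seconds with `context deadline exceeded`, and the attempt is logged as `stopped`. A minimal Go sketch of that kind of poll loop follows; it illustrates the behavior recorded here, not minikube's actual api_server.go, and the timeout and retry count are assumptions read off the timestamps.

```go
// healthz_poll.go — illustrative sketch of the apiserver health probe above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// ~5 s: matches the gap between each "Checking" line and its failure.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// the guest apiserver serves a self-signed certificate
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://10.0.2.15:8443/healthz"
	for attempt := 1; attempt <= 3; attempt++ { // the real loop runs to an overall deadline
		fmt.Printf("Checking apiserver healthz at %s ...\n", url)
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err)
			continue // minikube gathers component logs here before retrying
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
	}
}
```

The timestamps show the whole cycle repeating roughly every eight seconds: five seconds of timeout plus two to three seconds spent on the log-gathering pass that follows each failure.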
	I0813 17:27:49.282358    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:27:49.320699    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:27:49.320843    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:27:49.342430    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:27:49.342567    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:27:49.362462    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:27:49.362535    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:27:49.374903    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:27:49.374967    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:27:49.384894    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:27:49.384958    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:27:49.395691    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:27:49.395756    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:27:49.406086    4162 logs.go:276] 0 containers: []
	W0813 17:27:49.406098    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:27:49.406158    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:27:49.417224    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
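Each failed probe is followed by the discovery sweep above: one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` per control-plane component, whose output becomes the `N containers: [...]` lines (two IDs per component here because a current and an exited instance both match). A sketch of that sweep, assuming a hypothetical helper `idsFor` rather than minikube's logs.go:

```go
// container_ids.go — illustrative sketch of the per-component container sweep.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// idsFor lists all container IDs whose name matches k8s_<component>,
// mirroring the docker invocation in the log.
func idsFor(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		ids, err := idsFor(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```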
	I0813 17:27:49.417241    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:27:49.417259    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:27:49.455865    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:27:49.455872    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:27:49.473078    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:27:49.473088    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:27:49.484763    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:27:49.484776    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:27:49.498623    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:27:49.498634    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:27:49.523620    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:27:49.523630    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:27:49.539964    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:27:49.539976    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:27:49.554034    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:27:49.554046    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:27:49.567665    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:27:49.567677    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:27:49.584777    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:27:49.584788    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:27:49.596589    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:27:49.596600    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:27:49.608988    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:27:49.609001    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:27:49.613746    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:27:49.613754    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:27:49.650174    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:27:49.650185    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:27:49.664969    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:27:49.664981    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:27:49.676957    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:27:49.676967    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:27:49.689048    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:27:49.689058    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:27:52.211720    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:27:57.214062    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:27:57.214276    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:27:57.234257    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:27:57.234352    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:27:57.248437    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:27:57.248513    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:27:57.260544    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:27:57.260608    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:27:57.271161    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:27:57.271223    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:27:57.281413    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:27:57.281473    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:27:57.291603    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:27:57.291666    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:27:57.301325    4162 logs.go:276] 0 containers: []
	W0813 17:27:57.301335    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:27:57.301384    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:27:57.312174    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:27:57.312190    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:27:57.312194    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:27:57.331486    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:27:57.331496    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:27:57.342715    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:27:57.342730    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:27:57.361298    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:27:57.361309    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:27:57.387787    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:27:57.387794    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:27:57.391776    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:27:57.391782    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:27:57.402888    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:27:57.402898    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:27:57.416868    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:27:57.416877    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:27:57.433877    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:27:57.433887    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:27:57.445440    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:27:57.445450    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:27:57.456311    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:27:57.456321    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:27:57.468642    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:27:57.468653    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:27:57.508228    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:27:57.508239    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:27:57.544045    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:27:57.544059    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:27:57.558201    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:27:57.558215    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:27:57.572365    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:27:57.572375    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:27:57.583676    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:27:57.583686    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
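The `Gathering logs for ...` pairs reduce to two command shapes: `docker logs --tail 400 <id>` for every discovered container, plus journalctl and dmesg for the host-level sources. A compressed sketch of one gathering pass, reusing the apiserver IDs from this log; the `run` helper is an assumption, standing in for the remote execution that ssh_runner.go performs:

```go
// gather_logs.go — illustrative sketch of one log-gathering pass.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a gathering command through bash, as the log's
// "Run: /bin/bash -c ..." lines do on the guest.
func run(cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("==> %s <==\n%s", cmd, out)
	if err != nil {
		fmt.Printf("(failed: %v)\n", err)
	}
}

func main() {
	// per-container sources: last 400 lines of each discovered ID
	for _, id := range []string{"7246d32eed31", "4f718d28b77f"} {
		run("docker logs --tail 400 " + id)
	}
	// host-level sources, exactly as they appear in the log above
	run("sudo journalctl -u kubelet -n 400")
	run("sudo journalctl -u docker -u cri-docker -n 400")
	run("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
}
```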
	I0813 17:28:00.097493    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:28:05.099387    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:28:05.099840    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:28:05.133446    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:28:05.133579    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:28:05.152757    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:28:05.152855    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:28:05.166764    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:28:05.166836    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:28:05.179079    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:28:05.179142    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:28:05.190008    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:28:05.190075    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:28:05.201100    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:28:05.201171    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:28:05.211301    4162 logs.go:276] 0 containers: []
	W0813 17:28:05.211311    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:28:05.211367    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:28:05.222032    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:28:05.222048    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:28:05.222053    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:28:05.242269    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:28:05.242278    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:28:05.254192    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:28:05.254206    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:28:05.267025    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:28:05.267038    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:28:05.278355    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:28:05.278369    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:28:05.304292    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:28:05.304302    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:28:05.338886    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:28:05.338894    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:28:05.350809    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:28:05.350821    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:28:05.368892    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:28:05.368903    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:28:05.380927    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:28:05.380937    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:28:05.395568    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:28:05.395580    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:28:05.409040    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:28:05.409052    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:28:05.426997    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:28:05.427007    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:28:05.438564    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:28:05.438577    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:28:05.449629    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:28:05.449642    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:28:05.488564    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:28:05.488572    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:28:05.492645    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:28:05.492651    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:28:08.008742    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:28:13.009765    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:28:13.010045    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:28:13.048591    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:28:13.048708    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:28:13.069167    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:28:13.069247    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:28:13.087211    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:28:13.087288    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:28:13.109001    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:28:13.109059    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:28:13.121808    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:28:13.121884    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:28:13.133679    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:28:13.133720    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:28:13.144918    4162 logs.go:276] 0 containers: []
	W0813 17:28:13.144930    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:28:13.144966    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:28:13.155887    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:28:13.155903    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:28:13.155908    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:28:13.168149    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:28:13.168163    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:28:13.180715    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:28:13.180727    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:28:13.199694    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:28:13.199709    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:28:13.212753    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:28:13.212767    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:28:13.238305    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:28:13.238313    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:28:13.252149    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:28:13.252158    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:28:13.263511    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:28:13.263521    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:28:13.275765    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:28:13.275773    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:28:13.280097    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:28:13.280105    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:28:13.317795    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:28:13.317806    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:28:13.341840    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:28:13.341855    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:28:13.355563    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:28:13.355578    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:28:13.368664    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:28:13.368681    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:28:13.409726    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:28:13.409748    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:28:13.433931    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:28:13.433957    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:28:13.453826    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:28:13.453845    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
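One gathering command is worth unpacking: the `container status` step runs `` sudo `which crictl || echo crictl` ps -a || sudo docker ps -a ``. If crictl is on PATH the substitution expands to its full path; otherwise it degrades to the bare word `crictl`, that invocation fails, and the trailing `||` falls back to plain `docker ps -a`. A sketch of invoking the same fallback from Go (illustrative only):

```go
// container_status.go — illustrative sketch of the crictl/docker fallback.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Backticks here are bash command substitution, not Go syntax:
	// `which crictl || echo crictl` becomes either crictl's path or "crictl".
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("container status failed:", err)
	}
}
```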
	I0813 17:28:15.971557    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:28:20.973735    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:28:20.973813    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:28:20.987118    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:28:20.987171    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:28:20.998745    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:28:20.998799    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:28:21.009774    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:28:21.009851    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:28:21.027671    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:28:21.027735    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:28:21.037956    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:28:21.038021    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:28:21.048915    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:28:21.048974    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:28:21.059541    4162 logs.go:276] 0 containers: []
	W0813 17:28:21.059551    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:28:21.059603    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:28:21.069956    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:28:21.069974    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:28:21.069979    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:28:21.110068    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:28:21.110080    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:28:21.127615    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:28:21.127629    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:28:21.149712    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:28:21.149725    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:28:21.168759    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:28:21.168772    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:28:21.181101    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:28:21.181110    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:28:21.194831    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:28:21.194840    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:28:21.212570    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:28:21.212581    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:28:21.230169    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:28:21.230179    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:28:21.249559    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:28:21.249572    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:28:21.261125    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:28:21.261136    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:28:21.285490    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:28:21.285501    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:28:21.318833    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:28:21.318846    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:28:21.336524    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:28:21.336534    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:28:21.349372    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:28:21.349384    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:28:21.354143    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:28:21.354150    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:28:21.368094    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:28:21.368105    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:28:23.879457    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:28:28.882017    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:28:28.882376    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:28:28.921498    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:28:28.921625    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:28:28.940744    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:28:28.940831    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:28:28.954932    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:28:28.955003    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:28:28.971828    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:28:28.971891    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:28:28.981784    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:28:28.981849    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:28:28.992381    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:28:28.992448    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:28:29.002697    4162 logs.go:276] 0 containers: []
	W0813 17:28:29.002708    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:28:29.002762    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:28:29.013755    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:28:29.013771    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:28:29.013778    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:28:29.025194    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:28:29.025205    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:28:29.051371    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:28:29.051379    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:28:29.062604    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:28:29.062615    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:28:29.067075    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:28:29.067081    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:28:29.080807    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:28:29.080820    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:28:29.092247    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:28:29.092257    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:28:29.103682    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:28:29.103691    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:28:29.142695    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:28:29.142702    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:28:29.177128    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:28:29.177139    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:28:29.191372    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:28:29.191381    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:28:29.203016    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:28:29.203025    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:28:29.214357    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:28:29.214371    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:28:29.227991    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:28:29.228001    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:28:29.247823    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:28:29.247834    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:28:29.259690    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:28:29.259700    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:28:29.277123    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:28:29.277136    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:28:31.799025    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:28:36.801186    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:28:36.801509    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:28:36.830899    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:28:36.831034    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:28:36.848135    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:28:36.848217    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:28:36.865676    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:28:36.865737    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:28:36.878371    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:28:36.878439    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:28:36.889018    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:28:36.889077    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:28:36.900067    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:28:36.900130    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:28:36.910124    4162 logs.go:276] 0 containers: []
	W0813 17:28:36.910137    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:28:36.910186    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:28:36.921081    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:28:36.921100    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:28:36.921107    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:28:36.932603    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:28:36.932615    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:28:36.945167    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:28:36.945178    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:28:36.984940    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:28:36.984954    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:28:37.001245    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:28:37.001257    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:28:37.026794    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:28:37.026806    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:28:37.038400    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:28:37.038412    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:28:37.052877    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:28:37.052888    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:28:37.065192    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:28:37.065203    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:28:37.091195    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:28:37.091209    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:28:37.095873    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:28:37.095884    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:28:37.131468    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:28:37.131478    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:28:37.143271    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:28:37.143284    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:28:37.155244    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:28:37.155257    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:28:37.170219    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:28:37.170229    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:28:37.197106    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:28:37.197117    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:28:37.214927    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:28:37.214937    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:28:39.726934    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:28:44.729180    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:28:44.729287    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:28:44.745599    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:28:44.745674    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:28:44.757678    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:28:44.757760    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:28:44.769864    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:28:44.769942    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:28:44.782627    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:28:44.782701    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:28:44.799075    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:28:44.799150    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:28:44.817666    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:28:44.817736    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:28:44.829498    4162 logs.go:276] 0 containers: []
	W0813 17:28:44.829509    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:28:44.829569    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:28:44.842458    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:28:44.842477    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:28:44.842483    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:28:44.862238    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:28:44.862258    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:28:44.876264    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:28:44.876275    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:28:44.892270    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:28:44.892281    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:28:44.908050    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:28:44.908067    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:28:44.923880    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:28:44.923900    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:28:44.936913    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:28:44.936925    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:28:44.981867    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:28:44.981882    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:28:45.004032    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:28:45.004052    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:28:45.019616    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:28:45.019629    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:28:45.035771    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:28:45.035782    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:28:45.062196    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:28:45.062212    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:28:45.105773    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:28:45.105794    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:28:45.111258    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:28:45.111269    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:28:45.130297    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:28:45.130321    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:28:45.143921    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:28:45.143939    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:28:45.156899    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:28:45.156911    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:28:47.674561    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:28:52.676921    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:28:52.677298    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:28:52.721964    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:28:52.722104    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:28:52.741228    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:28:52.741320    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:28:52.755643    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:28:52.755717    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:28:52.767730    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:28:52.767796    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:28:52.782759    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:28:52.782834    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:28:52.794283    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:28:52.794349    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:28:52.805504    4162 logs.go:276] 0 containers: []
	W0813 17:28:52.805518    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:28:52.805577    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:28:52.817755    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:28:52.817776    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:28:52.817782    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:28:52.830043    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:28:52.830056    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:28:52.841304    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:28:52.841315    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:28:52.878046    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:28:52.878053    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:28:52.914642    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:28:52.914653    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:28:52.931838    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:28:52.931848    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:28:52.949200    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:28:52.949209    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:28:52.954035    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:28:52.954044    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:28:52.968059    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:28:52.968069    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:28:52.979583    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:28:52.979594    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:28:52.991194    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:28:52.991204    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:28:53.004775    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:28:53.004785    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:28:53.015929    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:28:53.015942    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:28:53.029629    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:28:53.029639    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:28:53.053668    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:28:53.053676    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:28:53.073316    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:28:53.073327    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:28:53.085374    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:28:53.085386    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:28:55.599239    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:29:00.601394    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:29:00.601506    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:29:00.612524    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:29:00.612607    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:29:00.623513    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:29:00.623589    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:29:00.634900    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:29:00.634970    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:29:00.645758    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:29:00.645826    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:29:00.656988    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:29:00.657056    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:29:00.667922    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:29:00.667989    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:29:00.678241    4162 logs.go:276] 0 containers: []
	W0813 17:29:00.678254    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:29:00.678316    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:29:00.689438    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:29:00.689457    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:29:00.689466    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:29:00.710167    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:29:00.710179    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:29:00.728153    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:29:00.728163    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:29:00.740101    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:29:00.740112    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:29:00.752077    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:29:00.752087    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:29:00.763684    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:29:00.763694    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:29:00.775129    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:29:00.775141    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:29:00.779404    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:29:00.779413    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:29:00.816090    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:29:00.816102    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:29:00.831222    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:29:00.831233    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:29:00.845798    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:29:00.845813    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:29:00.863183    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:29:00.863194    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:29:00.901408    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:29:00.901426    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:29:00.913063    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:29:00.913077    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:29:00.924940    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:29:00.924952    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:29:00.941214    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:29:00.941229    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:29:00.955976    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:29:00.955991    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:29:03.483574    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:29:08.486056    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:29:08.486170    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:29:08.500934    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:29:08.500999    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:29:08.516949    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:29:08.517018    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:29:08.528220    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:29:08.528281    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:29:08.538569    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:29:08.538642    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:29:08.549338    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:29:08.549400    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:29:08.559972    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:29:08.560034    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:29:08.570472    4162 logs.go:276] 0 containers: []
	W0813 17:29:08.570485    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:29:08.570544    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:29:08.581487    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:29:08.581505    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:29:08.581511    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:29:08.595800    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:29:08.595813    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:29:08.607856    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:29:08.607867    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:29:08.619788    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:29:08.619798    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:29:08.643914    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:29:08.643924    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:29:08.656909    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:29:08.656919    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:29:08.676283    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:29:08.676294    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:29:08.688978    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:29:08.688990    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:29:08.700557    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:29:08.700568    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:29:08.711713    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:29:08.711725    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:29:08.748129    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:29:08.748143    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:29:08.752816    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:29:08.752822    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:29:08.766630    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:29:08.766641    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:29:08.778121    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:29:08.778131    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:29:08.795519    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:29:08.795529    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:29:08.833599    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:29:08.833617    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:29:08.857041    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:29:08.857050    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:29:11.374844    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:29:16.376334    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:29:16.376513    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:29:16.392020    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:29:16.392096    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:29:16.403290    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:29:16.403391    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:29:16.413822    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:29:16.413879    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:29:16.424543    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:29:16.424617    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:29:16.435231    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:29:16.435297    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:29:16.450668    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:29:16.450737    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:29:16.463465    4162 logs.go:276] 0 containers: []
	W0813 17:29:16.463476    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:29:16.463531    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:29:16.474451    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:29:16.474469    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:29:16.474475    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:29:16.492001    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:29:16.492011    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:29:16.506097    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:29:16.506110    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:29:16.517855    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:29:16.517867    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:29:16.535538    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:29:16.535549    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:29:16.546771    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:29:16.546781    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:29:16.586034    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:29:16.586050    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:29:16.606403    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:29:16.606415    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:29:16.643970    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:29:16.643983    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:29:16.663053    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:29:16.663072    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:29:16.674505    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:29:16.674516    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:29:16.685819    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:29:16.685830    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:29:16.700181    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:29:16.700191    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:29:16.711688    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:29:16.711699    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:29:16.727402    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:29:16.727414    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:29:16.768715    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:29:16.768728    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:29:16.792907    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:29:16.792917    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:29:19.299668    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:29:24.302381    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:29:24.302832    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:29:24.344489    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:29:24.344616    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:29:24.368264    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:29:24.368367    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:29:24.382924    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:29:24.383017    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:29:24.395330    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:29:24.395406    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:29:24.406288    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:29:24.406350    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:29:24.416827    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:29:24.416892    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:29:24.431580    4162 logs.go:276] 0 containers: []
	W0813 17:29:24.431592    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:29:24.431656    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:29:24.442009    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:29:24.442024    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:29:24.442029    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:29:24.456120    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:29:24.456131    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:29:24.467782    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:29:24.467797    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:29:24.479075    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:29:24.479089    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:29:24.490566    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:29:24.490579    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:29:24.505105    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:29:24.505115    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:29:24.520390    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:29:24.520405    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:29:24.531727    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:29:24.531738    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:29:24.549235    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:29:24.549248    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:29:24.560072    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:29:24.560084    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:29:24.599274    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:29:24.599286    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:29:24.621498    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:29:24.621511    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:29:24.646099    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:29:24.646106    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:29:24.685283    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:29:24.685294    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:29:24.689519    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:29:24.689527    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:29:24.708995    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:29:24.709006    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:29:24.725992    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:29:24.726005    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:29:27.239269    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:29:32.241896    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:29:32.242075    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:29:32.253408    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:29:32.253481    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:29:32.264374    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:29:32.264447    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:29:32.274887    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:29:32.274954    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:29:32.285449    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:29:32.285519    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:29:32.296119    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:29:32.296181    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:29:32.306843    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:29:32.306915    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:29:32.324805    4162 logs.go:276] 0 containers: []
	W0813 17:29:32.324818    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:29:32.324878    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:29:32.337713    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:29:32.337735    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:29:32.337740    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:29:32.351976    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:29:32.351986    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:29:32.369863    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:29:32.369873    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:29:32.381638    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:29:32.381649    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:29:32.393126    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:29:32.393137    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:29:32.406019    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:29:32.406030    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:29:32.410816    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:29:32.410822    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:29:32.455611    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:29:32.455622    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:29:32.476148    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:29:32.476158    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:29:32.490924    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:29:32.490937    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:29:32.502687    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:29:32.502698    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:29:32.514562    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:29:32.514572    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:29:32.537818    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:29:32.537826    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:29:32.576017    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:29:32.576028    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:29:32.593095    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:29:32.593104    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:29:32.604528    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:29:32.604539    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:29:32.619494    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:29:32.619504    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:29:35.133579    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:29:40.135777    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:29:40.136102    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:29:40.163657    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:29:40.163768    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:29:40.181578    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:29:40.181671    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:29:40.194976    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:29:40.195050    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:29:40.207171    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:29:40.207237    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:29:40.217273    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:29:40.217328    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:29:40.228006    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:29:40.228071    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:29:40.238460    4162 logs.go:276] 0 containers: []
	W0813 17:29:40.238472    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:29:40.238519    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:29:40.249164    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:29:40.249179    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:29:40.249184    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:29:40.283887    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:29:40.283903    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:29:40.297925    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:29:40.297937    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:29:40.323737    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:29:40.323748    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:29:40.337938    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:29:40.337949    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:29:40.349534    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:29:40.349543    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:29:40.361680    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:29:40.361692    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:29:40.379665    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:29:40.379673    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:29:40.418335    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:29:40.418344    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:29:40.422901    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:29:40.422908    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:29:40.434147    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:29:40.434158    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:29:40.445995    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:29:40.446007    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:29:40.457668    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:29:40.457679    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:29:40.469422    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:29:40.469434    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:29:40.487026    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:29:40.487035    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:29:40.501220    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:29:40.501231    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:29:40.513006    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:29:40.513019    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:29:43.038313    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:29:48.041047    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:29:48.041171    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:29:48.052849    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:29:48.052934    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:29:48.065403    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:29:48.065492    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:29:48.077565    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:29:48.077648    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:29:48.096413    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:29:48.096487    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:29:48.108344    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:29:48.108418    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:29:48.120901    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:29:48.120974    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:29:48.132257    4162 logs.go:276] 0 containers: []
	W0813 17:29:48.132271    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:29:48.132335    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:29:48.144900    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:29:48.144919    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:29:48.144924    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:29:48.166976    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:29:48.166992    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:29:48.183538    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:29:48.183550    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:29:48.196318    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:29:48.196332    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:29:48.212389    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:29:48.212403    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:29:48.228945    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:29:48.228957    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:29:48.247917    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:29:48.247932    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:29:48.260700    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:29:48.260711    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:29:48.273594    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:29:48.273608    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:29:48.318606    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:29:48.318626    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:29:48.323792    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:29:48.323805    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:29:48.336916    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:29:48.336929    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:29:48.350223    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:29:48.350237    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:29:48.390100    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:29:48.390112    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:29:48.408353    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:29:48.408370    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:29:48.428262    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:29:48.428275    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:29:48.441620    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:29:48.441632    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:29:50.969214    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:29:55.971328    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:29:55.971438    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:29:55.982922    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:29:55.982992    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:29:55.993894    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:29:55.993960    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:29:56.004670    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:29:56.004740    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:29:56.015346    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:29:56.015416    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:29:56.026150    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:29:56.026219    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:29:56.036729    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:29:56.036798    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:29:56.050706    4162 logs.go:276] 0 containers: []
	W0813 17:29:56.050721    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:29:56.050774    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:29:56.061363    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:29:56.061388    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:29:56.061394    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:29:56.066364    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:29:56.066369    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:29:56.083660    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:29:56.083672    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:29:56.101779    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:29:56.101791    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:29:56.121398    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:29:56.121407    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:29:56.135103    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:29:56.135115    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:29:56.146734    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:29:56.146746    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:29:56.164700    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:29:56.164710    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:29:56.202623    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:29:56.202634    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:29:56.240111    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:29:56.240124    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:29:56.254666    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:29:56.254676    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:29:56.266069    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:29:56.266080    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:29:56.278225    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:29:56.278239    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:29:56.289943    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:29:56.289952    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:29:56.304448    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:29:56.304457    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:29:56.317835    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:29:56.317846    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:29:56.342121    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:29:56.342131    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:29:58.855988    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:03.856264    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:03.856520    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:30:03.870111    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:30:03.870185    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:30:03.880827    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:30:03.880899    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:30:03.891539    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:30:03.891609    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:30:03.902425    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:30:03.902499    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:30:03.915182    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:30:03.915249    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:30:03.932116    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:30:03.932189    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:30:03.942515    4162 logs.go:276] 0 containers: []
	W0813 17:30:03.942527    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:30:03.942586    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:30:03.953389    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:30:03.953409    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:30:03.953414    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:30:03.977147    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:30:03.977158    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:30:03.996322    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:30:03.996335    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:30:04.007678    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:30:04.007688    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:30:04.020251    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:30:04.020263    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:30:04.058100    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:30:04.058111    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:30:04.062901    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:30:04.062908    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:30:04.098976    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:30:04.098990    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:30:04.113251    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:30:04.113261    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:30:04.137682    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:30:04.137691    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:30:04.149950    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:30:04.149961    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:30:04.164435    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:30:04.164446    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:30:04.175843    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:30:04.175854    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:30:04.189838    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:30:04.189848    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:30:04.201399    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:30:04.201411    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:30:04.218913    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:30:04.218922    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:30:04.230019    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:30:04.230031    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:30:06.744906    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:11.747085    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:11.747197    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:30:11.759128    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:30:11.759201    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:30:11.770734    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:30:11.770801    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:30:11.782443    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:30:11.782514    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:30:11.798836    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:30:11.798907    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:30:11.809633    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:30:11.809699    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:30:11.822485    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:30:11.822554    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:30:11.834129    4162 logs.go:276] 0 containers: []
	W0813 17:30:11.834140    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:30:11.834190    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:30:11.846042    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:30:11.846064    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:30:11.846073    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:30:11.892968    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:30:11.892983    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:30:11.906636    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:30:11.906650    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:30:11.930451    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:30:11.930469    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:30:11.943623    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:30:11.943637    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:30:11.983061    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:30:11.983078    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:30:11.999840    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:30:11.999854    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:30:12.016981    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:30:12.016993    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:30:12.029605    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:30:12.029620    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:30:12.041431    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:30:12.041442    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:30:12.060715    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:30:12.060726    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:30:12.078694    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:30:12.078704    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:30:12.090733    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:30:12.090745    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:30:12.095347    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:30:12.095353    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:30:12.108959    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:30:12.108970    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:30:12.126455    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:30:12.126465    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:30:12.138098    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:30:12.138110    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:30:14.654233    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:19.656390    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:19.656643    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:30:19.679235    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:30:19.679334    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:30:19.695134    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:30:19.695215    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:30:19.707810    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:30:19.707867    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:30:19.719162    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:30:19.719237    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:30:19.729455    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:30:19.729535    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:30:19.740269    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:30:19.740340    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:30:19.754383    4162 logs.go:276] 0 containers: []
	W0813 17:30:19.754397    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:30:19.754467    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:30:19.776553    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:30:19.776570    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:30:19.776576    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:30:19.814778    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:30:19.814790    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:30:19.834831    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:30:19.834841    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:30:19.849179    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:30:19.849191    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:30:19.862668    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:30:19.862679    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:30:19.902335    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:30:19.902345    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:30:19.919291    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:30:19.919302    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:30:19.936504    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:30:19.936515    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:30:19.942758    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:30:19.942768    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:30:19.979117    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:30:19.979139    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:30:19.992282    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:30:19.992294    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:30:20.010425    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:30:20.010438    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:30:20.023306    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:30:20.023319    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:30:20.037438    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:30:20.037451    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:30:20.048913    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:30:20.048924    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:30:20.060389    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:30:20.060398    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:30:20.071932    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:30:20.071943    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:30:22.597056    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:27.599389    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:27.599672    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:30:27.629970    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:30:27.630088    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:30:27.646735    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:30:27.646814    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:30:27.659837    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:30:27.659908    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:30:27.671630    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:30:27.671702    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:30:27.682276    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:30:27.682342    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:30:27.696401    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:30:27.696469    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:30:27.706966    4162 logs.go:276] 0 containers: []
	W0813 17:30:27.706977    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:30:27.707031    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:30:27.718120    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:30:27.718139    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:30:27.718144    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:30:27.737331    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:30:27.737341    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:30:27.748506    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:30:27.748517    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:30:27.759600    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:30:27.759610    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:30:27.771322    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:30:27.771334    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:30:27.782685    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:30:27.782694    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:30:27.823391    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:30:27.823412    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:30:27.827723    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:30:27.827731    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:30:27.861938    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:30:27.861947    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:30:27.884960    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:30:27.884971    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:30:27.896856    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:30:27.896873    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:30:27.908868    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:30:27.908878    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:30:27.929553    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:30:27.929564    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:30:27.943258    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:30:27.943268    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:30:27.954679    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:30:27.954689    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:30:27.969230    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:30:27.969240    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:30:27.986521    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:30:27.986533    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
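Note: each "Gathering logs for ..." pass reduces to four kinds of commands, all visible verbatim in the Run: lines above; collected by hand they are:

	$ docker logs --tail 400 <container-id>                  # one per container ID found above
	$ sudo journalctl -u kubelet -n 400                      # kubelet unit log
	$ sudo journalctl -u docker -u cri-docker -n 400         # container runtime logs
	$ sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400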
	I0813 17:30:30.509800    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:35.512223    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:35.512367    4162 kubeadm.go:597] duration metric: took 4m4.545122792s to restartPrimaryControlPlane
	W0813 17:30:35.512494    4162 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0813 17:30:35.512557    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0813 17:30:36.549383    4162 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.036826792s)
	I0813 17:30:36.549448    4162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0813 17:30:36.554700    4162 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 17:30:36.558267    4162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 17:30:36.561729    4162 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 17:30:36.561739    4162 kubeadm.go:157] found existing configuration files:
	
	I0813 17:30:36.561777    4162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/admin.conf
	I0813 17:30:36.564739    4162 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0813 17:30:36.564781    4162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0813 17:30:36.567504    4162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/kubelet.conf
	I0813 17:30:36.570447    4162 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0813 17:30:36.570486    4162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0813 17:30:36.573628    4162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/controller-manager.conf
	I0813 17:30:36.576885    4162 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0813 17:30:36.576915    4162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0813 17:30:36.579801    4162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/scheduler.conf
	I0813 17:30:36.582477    4162 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0813 17:30:36.582516    4162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
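Note: the grep/rm sequence above is minikube's stale-kubeconfig cleanup: each /etc/kubernetes/*.conf is kept only if it already points at the expected control-plane endpoint (here https://control-plane.minikube.internal:50281); otherwise it is removed so kubeadm can regenerate it. Condensed into a sketch:

	$ for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q https://control-plane.minikube.internal:50281 /etc/kubernetes/$f.conf \
	      || sudo rm -f /etc/kubernetes/$f.conf
	  done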
	I0813 17:30:36.586013    4162 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0813 17:30:36.603001    4162 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0813 17:30:36.603031    4162 kubeadm.go:310] [preflight] Running pre-flight checks
	I0813 17:30:36.652004    4162 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0813 17:30:36.652060    4162 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0813 17:30:36.652115    4162 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0813 17:30:36.704672    4162 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0813 17:30:36.708665    4162 out.go:204]   - Generating certificates and keys ...
	I0813 17:30:36.708698    4162 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0813 17:30:36.708725    4162 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0813 17:30:36.708756    4162 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0813 17:30:36.708782    4162 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0813 17:30:36.708810    4162 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0813 17:30:36.708832    4162 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0813 17:30:36.708872    4162 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0813 17:30:36.708897    4162 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0813 17:30:36.708927    4162 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0813 17:30:36.708958    4162 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0813 17:30:36.708974    4162 kubeadm.go:310] [certs] Using the existing "sa" key
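Note: kubeadm reuses every certificate it finds under the certificateDir ("/var/lib/minikube/certs" per the [certs] line above) rather than regenerating them. If reuse is suspect, one way to inspect the reused certificates is kubeadm's own expiration check, using the binary path the log already shows (a sketch; output format varies by kubeadm version):

	$ sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm certs check-expiration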
	I0813 17:30:36.708996    4162 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0813 17:30:36.824090    4162 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0813 17:30:36.935114    4162 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0813 17:30:37.112307    4162 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0813 17:30:37.205538    4162 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0813 17:30:37.233381    4162 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0813 17:30:37.233906    4162 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0813 17:30:37.233958    4162 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0813 17:30:37.319903    4162 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0813 17:30:37.323760    4162 out.go:204]   - Booting up control plane ...
	I0813 17:30:37.323804    4162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0813 17:30:37.323838    4162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0813 17:30:37.323867    4162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0813 17:30:37.323906    4162 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0813 17:30:37.323980    4162 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0813 17:30:42.324377    4162 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.002598 seconds
	I0813 17:30:42.324525    4162 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0813 17:30:42.334780    4162 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0813 17:30:42.845063    4162 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0813 17:30:42.845181    4162 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-126000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0813 17:30:43.350168    4162 kubeadm.go:310] [bootstrap-token] Using token: zkcav3.7ynvfpmi1ev3k3bj
	I0813 17:30:43.356588    4162 out.go:204]   - Configuring RBAC rules ...
	I0813 17:30:43.356664    4162 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0813 17:30:43.356734    4162 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0813 17:30:43.359212    4162 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0813 17:30:43.364185    4162 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0813 17:30:43.365496    4162 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0813 17:30:43.367199    4162 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0813 17:30:43.370686    4162 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0813 17:30:43.514517    4162 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0813 17:30:43.754998    4162 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0813 17:30:43.755563    4162 kubeadm.go:310] 
	I0813 17:30:43.755598    4162 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0813 17:30:43.755608    4162 kubeadm.go:310] 
	I0813 17:30:43.755666    4162 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0813 17:30:43.755672    4162 kubeadm.go:310] 
	I0813 17:30:43.755688    4162 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0813 17:30:43.755727    4162 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0813 17:30:43.755758    4162 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0813 17:30:43.755762    4162 kubeadm.go:310] 
	I0813 17:30:43.755795    4162 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0813 17:30:43.755798    4162 kubeadm.go:310] 
	I0813 17:30:43.755839    4162 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0813 17:30:43.755860    4162 kubeadm.go:310] 
	I0813 17:30:43.755889    4162 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0813 17:30:43.755936    4162 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0813 17:30:43.756023    4162 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0813 17:30:43.756031    4162 kubeadm.go:310] 
	I0813 17:30:43.756080    4162 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0813 17:30:43.756129    4162 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0813 17:30:43.756135    4162 kubeadm.go:310] 
	I0813 17:30:43.756196    4162 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zkcav3.7ynvfpmi1ev3k3bj \
	I0813 17:30:43.756258    4162 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:94a653d9144e0f51dbf8cb0881c67d995fb93f16972a5a4e4bd9f3c8d4a5aa34 \
	I0813 17:30:43.756273    4162 kubeadm.go:310] 	--control-plane 
	I0813 17:30:43.756282    4162 kubeadm.go:310] 
	I0813 17:30:43.756368    4162 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0813 17:30:43.756375    4162 kubeadm.go:310] 
	I0813 17:30:43.756431    4162 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zkcav3.7ynvfpmi1ev3k3bj \
	I0813 17:30:43.756504    4162 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:94a653d9144e0f51dbf8cb0881c67d995fb93f16972a5a4e4bd9f3c8d4a5aa34 
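Note: the --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key. It can be recomputed from the CA certificate to verify a join command; e.g. for an RSA CA key (the cert path follows the certificateDir from the [certs] section; this is the standard kubeadm recipe, not something this log ran):

	$ openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'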
	I0813 17:30:43.756569    4162 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0813 17:30:43.756578    4162 cni.go:84] Creating CNI manager for ""
	I0813 17:30:43.756587    4162 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0813 17:30:43.760671    4162 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 17:30:43.768644    4162 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0813 17:30:43.771919    4162 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
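Note: the 496-byte conflist scp'd above is minikube's bridge CNI configuration; its exact contents are not shown in this log. For illustration only, a minimal bridge conflist of the same shape (the field values here are assumptions, not what minikube actually wrote):

	$ sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF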
	I0813 17:30:43.776811    4162 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 17:30:43.776874    4162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 17:30:43.776874    4162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-126000 minikube.k8s.io/updated_at=2024_08_13T17_30_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf minikube.k8s.io/name=running-upgrade-126000 minikube.k8s.io/primary=true
	I0813 17:30:43.829232    4162 ops.go:34] apiserver oom_adj: -16
	I0813 17:30:43.829232    4162 kubeadm.go:1113] duration metric: took 52.408584ms to wait for elevateKubeSystemPrivileges
	I0813 17:30:43.829246    4162 kubeadm.go:394] duration metric: took 4m12.875502459s to StartCluster
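Note: the oom_adj check above (ops.go:34) confirms the kernel's OOM killer will strongly deprioritize the apiserver process; -16 is near the minimum of the legacy -17..15 oom_adj scale. Reproducible with the same command the log ran:

	$ cat /proc/$(pgrep kube-apiserver)/oom_adj
	-16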
	I0813 17:30:43.829257    4162 settings.go:142] acquiring lock: {Name:mkaf11e998595d0fbc8bedb0051c4325b4dc127d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:30:43.829342    4162 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:30:43.829720    4162 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/kubeconfig: {Name:mk4f6a628d9f9f6550ed229faba2a879ed685a75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:30:43.830188    4162 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:30:43.830276    4162 config.go:182] Loaded profile config "running-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0813 17:30:43.830253    4162 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0813 17:30:43.830292    4162 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-126000"
	I0813 17:30:43.830300    4162 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-126000"
	I0813 17:30:43.830305    4162 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-126000"
	W0813 17:30:43.830308    4162 addons.go:243] addon storage-provisioner should already be in state true
	I0813 17:30:43.830315    4162 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-126000"
	I0813 17:30:43.830319    4162 host.go:66] Checking if "running-upgrade-126000" exists ...
	I0813 17:30:43.831254    4162 kapi.go:59] client config for running-upgrade-126000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/running-upgrade-126000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/running-upgrade-126000/client.key", CAFile:"/Users/jenkins/minikube-integration/19429-1127/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1045cbe30), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 17:30:43.831375    4162 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-126000"
	W0813 17:30:43.831380    4162 addons.go:243] addon default-storageclass should already be in state true
	I0813 17:30:43.831387    4162 host.go:66] Checking if "running-upgrade-126000" exists ...
	I0813 17:30:43.834456    4162 out.go:177] * Verifying Kubernetes components...
	I0813 17:30:43.834767    4162 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 17:30:43.838721    4162 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 17:30:43.838729    4162 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/running-upgrade-126000/id_rsa Username:docker}
	I0813 17:30:43.842381    4162 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 17:30:43.846575    4162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 17:30:43.850619    4162 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 17:30:43.850624    4162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 17:30:43.850631    4162 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/running-upgrade-126000/id_rsa Username:docker}
	I0813 17:30:43.946193    4162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0813 17:30:43.951181    4162 api_server.go:52] waiting for apiserver process to appear ...
	I0813 17:30:43.951237    4162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 17:30:43.955286    4162 api_server.go:72] duration metric: took 125.087334ms to wait for apiserver process to appear ...
	I0813 17:30:43.955294    4162 api_server.go:88] waiting for apiserver healthz status ...
	I0813 17:30:43.955301    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:43.992680    4162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 17:30:44.020183    4162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
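Note: the two kubectl apply calls above install the addon manifests that were scp'd into /etc/kubernetes/addons/ earlier. They can be checked (or re-run) by hand with the same in-guest paths; a sketch:

	$ sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.24.1/kubectl get -f /etc/kubernetes/addons/storage-provisioner.yaml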
	I0813 17:30:44.329073    4162 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0813 17:30:44.329086    4162 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0813 17:30:48.957367    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:48.957404    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:53.957671    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:53.957705    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:58.957950    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:58.957997    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:03.958388    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:03.958426    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:08.958978    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:08.959001    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:13.959862    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:13.959884    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0813 17:31:14.331121    4162 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0813 17:31:14.337593    4162 out.go:177] * Enabled addons: storage-provisioner
	I0813 17:31:14.349404    4162 addons.go:510] duration metric: took 30.519589375s for enable addons: enabled=[storage-provisioner]
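Note: 'default-storageclass' failed only because the StorageClass list call hit the unreachable apiserver (dial tcp 10.0.2.15:8443: i/o timeout), not because the manifest was rejected. Once the apiserver answers, the same check can be repeated manually (paths taken from the log):

	$ sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.24.1/kubectl get storageclasses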
	I0813 17:31:18.960740    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:18.960760    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:23.961894    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:23.961933    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:28.963524    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:28.963550    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:33.963901    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:33.963922    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:38.965892    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:38.965941    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:43.968111    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:43.968202    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:31:43.978749    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:31:43.978818    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:31:43.990253    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:31:43.990331    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:31:44.001287    4162 logs.go:276] 2 containers: [f436aa55d977 538bf00465c8]
	I0813 17:31:44.001374    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:31:44.011531    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:31:44.011613    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:31:44.021707    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:31:44.021775    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:31:44.032082    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:31:44.032167    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:31:44.042763    4162 logs.go:276] 0 containers: []
	W0813 17:31:44.042775    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:31:44.042844    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:31:44.053240    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:31:44.053255    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:31:44.053261    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:31:44.067823    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:31:44.067834    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:31:44.080166    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:31:44.080178    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:31:44.094656    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:31:44.094666    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:31:44.106869    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:31:44.106880    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:31:44.128617    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:31:44.128628    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:31:44.140150    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:31:44.140161    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:31:44.175002    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:31:44.175012    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:31:44.209002    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:31:44.209013    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:31:44.233582    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:31:44.233590    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:31:44.245526    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:31:44.245537    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:31:44.256942    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:31:44.256952    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:31:44.261590    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:31:44.261595    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:31:46.777526    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:51.778005    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:51.778144    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:31:51.795488    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:31:51.795599    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:31:51.808818    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:31:51.808902    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:31:51.821521    4162 logs.go:276] 2 containers: [f436aa55d977 538bf00465c8]
	I0813 17:31:51.821603    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:31:51.831519    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:31:51.831591    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:31:51.841955    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:31:51.842037    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:31:51.852700    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:31:51.852768    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:31:51.862686    4162 logs.go:276] 0 containers: []
	W0813 17:31:51.862699    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:31:51.862760    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:31:51.873404    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:31:51.873417    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:31:51.873422    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:31:51.898789    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:31:51.898798    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:31:51.912077    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:31:51.912089    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:31:51.948129    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:31:51.948138    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:31:51.962468    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:31:51.962478    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:31:51.976342    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:31:51.976354    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:31:51.988140    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:31:51.988151    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:31:52.005338    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:31:52.005350    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:31:52.016829    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:31:52.016840    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:31:52.022998    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:31:52.023005    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:31:52.061154    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:31:52.061165    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:31:52.072551    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:31:52.072562    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:31:52.087001    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:31:52.087011    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:31:54.601037    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:59.603381    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:59.603724    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:31:59.638922    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:31:59.639061    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:31:59.658863    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:31:59.658973    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:31:59.675657    4162 logs.go:276] 2 containers: [f436aa55d977 538bf00465c8]
	I0813 17:31:59.675730    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:31:59.687257    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:31:59.687327    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:31:59.697540    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:31:59.697613    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:31:59.708208    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:31:59.708283    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:31:59.719486    4162 logs.go:276] 0 containers: []
	W0813 17:31:59.719500    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:31:59.719562    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:31:59.729725    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:31:59.729747    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:31:59.729753    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:31:59.734251    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:31:59.734259    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:31:59.748854    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:31:59.748868    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:31:59.760627    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:31:59.760638    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:31:59.776288    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:31:59.776300    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:31:59.792371    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:31:59.792383    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:31:59.827187    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:31:59.827195    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:31:59.863712    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:31:59.863723    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:31:59.877539    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:31:59.877549    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:31:59.888774    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:31:59.888785    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:31:59.903668    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:31:59.903679    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:31:59.915641    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:31:59.915652    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:31:59.933588    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:31:59.933597    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:02.458080    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:07.458862    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:07.459019    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:07.473444    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:32:07.473535    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:07.484722    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:32:07.484794    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:07.494625    4162 logs.go:276] 2 containers: [f436aa55d977 538bf00465c8]
	I0813 17:32:07.494701    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:07.505062    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:32:07.505136    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:07.515536    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:32:07.515600    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:07.525540    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:32:07.525614    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:07.535474    4162 logs.go:276] 0 containers: []
	W0813 17:32:07.535489    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:07.535559    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:07.545417    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:32:07.545435    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:32:07.545442    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:07.557416    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:32:07.557426    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:32:07.571509    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:32:07.571518    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:32:07.582712    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:32:07.582722    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:32:07.594411    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:32:07.594422    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:32:07.612880    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:32:07.612891    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:32:07.624204    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:32:07.624217    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:32:07.639677    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:07.639688    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:07.663222    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:07.663233    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:07.697463    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:07.697474    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:07.702488    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:07.702494    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:07.736795    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:32:07.736806    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:32:07.752680    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:32:07.752691    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:32:10.265942    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:15.268148    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:15.268409    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:15.290186    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:32:15.290307    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:15.307500    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:32:15.307600    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:15.320840    4162 logs.go:276] 2 containers: [f436aa55d977 538bf00465c8]
	I0813 17:32:15.320927    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:15.332175    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:32:15.332259    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:15.342865    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:32:15.342956    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:15.353298    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:32:15.353373    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:15.363606    4162 logs.go:276] 0 containers: []
	W0813 17:32:15.363618    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:15.363690    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:15.374078    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:32:15.374091    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:32:15.374099    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:32:15.387221    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:32:15.387233    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:32:15.406901    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:32:15.406912    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:32:15.418792    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:32:15.418807    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:32:15.434307    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:32:15.434320    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:15.446485    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:15.446496    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:15.481564    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:32:15.481575    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:32:15.496786    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:32:15.496799    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:32:15.510574    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:32:15.510584    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:32:15.522128    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:32:15.522138    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:32:15.540120    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:15.540129    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:15.563187    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:15.563195    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:15.567540    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:15.567548    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:18.107287    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:23.109498    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:23.109595    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:23.121633    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:32:23.121722    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:23.132197    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:32:23.132270    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:23.142378    4162 logs.go:276] 2 containers: [f436aa55d977 538bf00465c8]
	I0813 17:32:23.142457    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:23.155616    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:32:23.155688    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:23.166122    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:32:23.166197    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:23.176766    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:32:23.176832    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:23.186939    4162 logs.go:276] 0 containers: []
	W0813 17:32:23.186951    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:23.187023    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:23.199376    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:32:23.199391    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:23.199396    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:23.235056    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:23.235068    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:23.271030    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:32:23.271042    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:32:23.283002    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:32:23.283015    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:32:23.297360    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:32:23.297371    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:32:23.308824    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:32:23.308835    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:32:23.326651    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:32:23.326662    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:23.339423    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:23.339434    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:23.344011    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:32:23.344018    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:32:23.359071    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:32:23.359082    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:32:23.374125    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:32:23.374137    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:32:23.385910    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:32:23.385921    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:32:23.405153    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:23.405164    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:25.930937    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:30.930996    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:30.931403    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:30.962172    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:32:30.962310    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:30.980846    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:32:30.980955    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:30.995353    4162 logs.go:276] 2 containers: [f436aa55d977 538bf00465c8]
	I0813 17:32:30.995438    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:31.007049    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:32:31.007124    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:31.017135    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:32:31.017211    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:31.027300    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:32:31.027375    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:31.037675    4162 logs.go:276] 0 containers: []
	W0813 17:32:31.037688    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:31.037750    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:31.048187    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:32:31.048201    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:31.048205    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:31.083509    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:32:31.083517    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:32:31.098521    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:32:31.098530    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:32:31.110844    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:32:31.110856    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:32:31.125383    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:32:31.125392    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:32:31.137994    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:32:31.138005    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:32:31.149257    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:31.149267    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:31.173111    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:32:31.173120    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:31.184592    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:31.184602    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:31.189150    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:31.189156    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:31.224320    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:32:31.224332    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:32:31.239259    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:32:31.239269    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:32:31.253687    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:32:31.253696    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
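
	[Annotation] Each "Gathering logs for X ..." pair above maps one log source to one shell command: docker logs --tail 400 <id> for containers, journalctl -u <unit> -n 400 for kubelet and Docker, and kubectl describe nodes for node state, all wrapped in /bin/bash -c and executed inside the guest through ssh_runner. A minimal local sketch of that gathering step follows (the commands are copied verbatim from the log; running them locally rather than over SSH, and the gather helper itself, are assumptions):

	    // Sketch of the gathering step. Each source maps to one shell command,
	    // exactly as the /bin/bash -c lines above show; in the report these run
	    // inside the guest via ssh_runner, while this sketch runs them locally.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // gather runs one collection command through bash -c and returns its
	    // combined stdout and stderr.
	    func gather(cmd string) (string, error) {
	        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	        return string(out), err
	    }

	    func main() {
	        // Commands copied from the log; the container ID is the etcd
	        // container enumerated above.
	        sources := map[string]string{
	            "kubelet": "sudo journalctl -u kubelet -n 400",
	            "Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
	            "etcd":    "docker logs --tail 400 e08b09b94f46",
	        }
	        for name, cmd := range sources {
	            fmt.Printf("Gathering logs for %s ...\n", name)
	            out, err := gather(cmd)
	            if err != nil {
	                fmt.Println("  error:", err)
	                continue
	            }
	            fmt.Printf("  %d bytes collected\n", len(out))
	        }
	    }
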
	I0813 17:32:33.776188    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:38.776472    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:38.776607    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:38.798639    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:32:38.798720    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:38.816543    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:32:38.816616    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:38.827969    4162 logs.go:276] 2 containers: [f436aa55d977 538bf00465c8]
	I0813 17:32:38.828041    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:38.838639    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:32:38.838707    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:38.849833    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:32:38.849891    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:38.861230    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:32:38.861309    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:38.872663    4162 logs.go:276] 0 containers: []
	W0813 17:32:38.872685    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:38.872803    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:38.883752    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:32:38.883766    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:32:38.883771    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:32:38.899886    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:32:38.899898    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:32:38.912942    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:32:38.912953    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:32:38.932490    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:38.932504    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:38.958315    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:38.958329    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:38.993881    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:32:38.993892    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:32:39.008117    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:32:39.008128    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:32:39.022381    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:32:39.022392    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:32:39.033533    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:32:39.033544    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:39.045102    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:39.045112    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:39.049568    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:39.049574    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:39.085284    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:32:39.085294    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:32:39.097045    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:32:39.097056    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:32:41.614436    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:46.615440    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:46.615731    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:46.644829    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:32:46.644973    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:46.668028    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:32:46.668114    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:46.681445    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:32:46.681527    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:46.693433    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:32:46.693505    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:46.704241    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:32:46.704317    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:46.723944    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:32:46.724026    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:46.734135    4162 logs.go:276] 0 containers: []
	W0813 17:32:46.734148    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:46.734215    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:46.744193    4162 logs.go:276] 1 containers: [701c525892f6]
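
	[Annotation] Note that the coredns filter in the enumeration pass just above now matches four containers (edc79ce83d8a and 7e4d0301e234 appear alongside the original pair), consistent with the CoreDNS pods being recreated while the apiserver stays unreachable. A hedged sketch of this per-component discovery step (the docker flags are those shown in the log; the containerIDs helper is invented for illustration):

	    // Sketch of the per-component container discovery above; the docker
	    // flags are copied from the log, the helper name is invented.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // containerIDs lists all containers, running or exited, whose name
	    // matches k8s_<component>, mirroring the
	    // "docker ps -a --filter=name=k8s_... --format={{.ID}}" calls above.
	    func containerIDs(component string) ([]string, error) {
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter", "name=k8s_"+component,
	            "--format", "{{.ID}}").Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil
	    }

	    func main() {
	        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "storage-provisioner"} {
	            ids, err := containerIDs(c)
	            if err != nil {
	                fmt.Println(c, "error:", err)
	                continue
	            }
	            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	        }
	    }
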
	I0813 17:32:46.744212    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:32:46.744218    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:32:46.763259    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:32:46.763270    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:32:46.778037    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:32:46.778048    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:46.789934    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:32:46.789945    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:32:46.804485    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:32:46.804495    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:32:46.816196    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:46.816207    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:46.849170    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:32:46.849179    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:32:46.860144    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:32:46.860154    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:32:46.871304    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:32:46.871318    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:32:46.883189    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:32:46.883199    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:32:46.895945    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:32:46.895956    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:32:46.913760    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:32:46.913772    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:32:46.925346    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:46.925355    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:46.930219    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:46.930226    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:46.965290    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:46.965302    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:49.492302    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:54.493827    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:54.493947    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:54.506674    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:32:54.506755    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:54.518741    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:32:54.518833    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:54.530080    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:32:54.530155    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:54.540653    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:32:54.540739    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:54.552511    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:32:54.552585    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:54.564093    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:32:54.564169    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:54.574873    4162 logs.go:276] 0 containers: []
	W0813 17:32:54.574889    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:54.574954    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:54.585888    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:32:54.585908    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:54.585913    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:54.620135    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:54.620146    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:54.655004    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:32:54.655015    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:32:54.668289    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:32:54.668302    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:32:54.683420    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:32:54.683431    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:32:54.699070    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:32:54.699081    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:32:54.712776    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:32:54.712787    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:32:54.724775    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:54.724785    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:54.749880    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:32:54.749892    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:54.761354    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:54.761367    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:54.765980    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:32:54.765988    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:32:54.778003    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:32:54.778014    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:32:54.792244    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:32:54.792255    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:32:54.807061    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:32:54.807072    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:32:54.828817    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:32:54.828829    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:32:57.342705    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:02.344532    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:02.344796    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:02.374622    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:33:02.374748    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:02.392774    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:33:02.392888    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:02.407417    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:33:02.407504    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:02.419476    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:33:02.419547    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:02.430440    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:33:02.430517    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:02.441393    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:33:02.441474    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:02.453355    4162 logs.go:276] 0 containers: []
	W0813 17:33:02.453369    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:02.453434    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:02.464515    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:33:02.464536    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:33:02.464542    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:33:02.478731    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:33:02.478741    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:33:02.491142    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:02.491154    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:02.495602    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:33:02.495608    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:33:02.506889    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:33:02.506900    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:33:02.523756    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:02.523765    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:02.549216    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:33:02.549225    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:02.560817    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:02.560828    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:02.596486    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:33:02.596496    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:33:02.608347    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:33:02.608360    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:33:02.623527    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:33:02.623537    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:33:02.634918    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:02.634929    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:02.670129    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:33:02.670140    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:33:02.689669    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:33:02.689680    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:33:02.702147    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:33:02.702159    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:33:05.216093    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:10.218364    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:10.218567    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:10.237173    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:33:10.237257    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:10.251823    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:33:10.251904    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:10.263603    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:33:10.263679    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:10.274400    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:33:10.274476    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:10.287172    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:33:10.287240    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:10.297581    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:33:10.297657    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:10.307862    4162 logs.go:276] 0 containers: []
	W0813 17:33:10.307876    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:10.307943    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:10.318531    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:33:10.318549    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:10.318554    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:10.352085    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:33:10.352094    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:33:10.365709    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:33:10.365721    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:33:10.377669    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:33:10.377678    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:33:10.392410    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:33:10.392420    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:33:10.403785    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:33:10.403793    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:10.417402    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:10.417412    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:10.454030    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:33:10.454041    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:33:10.468347    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:33:10.468358    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:33:10.481917    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:33:10.481929    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:33:10.502676    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:10.502692    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:10.509476    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:33:10.509486    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:33:10.520415    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:33:10.520426    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:33:10.532824    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:33:10.532835    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:33:10.544111    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:10.544122    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:13.071407    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:18.071955    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:18.072125    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:18.086726    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:33:18.086815    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:18.098485    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:33:18.098554    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:18.109218    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:33:18.109307    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:18.119943    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:33:18.120018    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:18.130296    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:33:18.130367    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:18.140779    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:33:18.140853    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:18.150446    4162 logs.go:276] 0 containers: []
	W0813 17:33:18.150456    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:18.150516    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:18.160832    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:33:18.160849    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:33:18.160854    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:33:18.175260    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:33:18.175271    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:33:18.187210    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:33:18.187221    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:33:18.205266    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:33:18.205276    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:33:18.217592    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:33:18.217601    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:33:18.228857    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:18.228869    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:18.233187    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:18.233193    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:18.268298    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:33:18.268309    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:33:18.280239    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:33:18.280252    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:33:18.297176    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:18.297187    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:18.330265    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:33:18.330274    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:33:18.343841    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:33:18.343852    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:33:18.355695    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:33:18.355706    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:33:18.368671    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:18.368682    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:18.392153    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:33:18.392162    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:20.905759    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:25.907874    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:25.908065    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:25.925156    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:33:25.925256    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:25.938118    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:33:25.938200    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:25.949990    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:33:25.950072    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:25.960227    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:33:25.960305    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:25.971229    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:33:25.971303    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:25.981488    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:33:25.981557    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:25.991480    4162 logs.go:276] 0 containers: []
	W0813 17:33:25.991496    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:25.991565    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:26.001994    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:33:26.002011    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:33:26.002016    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:33:26.014107    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:33:26.014118    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:33:26.025472    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:33:26.025482    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:26.036923    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:26.036933    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:26.041152    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:33:26.041158    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:33:26.057523    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:33:26.057534    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:33:26.072976    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:26.072986    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:26.096487    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:33:26.096496    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:33:26.110847    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:33:26.110858    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:33:26.123115    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:33:26.123125    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:33:26.134967    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:33:26.134977    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:33:26.149734    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:33:26.149744    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:33:26.168150    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:26.168161    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:26.203788    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:33:26.203798    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:33:26.223334    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:26.223344    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:28.760900    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:33.763085    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:33.763210    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:33.776038    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:33:33.776129    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:33.787443    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:33:33.787521    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:33.798175    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:33:33.798252    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:33.813850    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:33:33.813930    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:33.824367    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:33:33.824441    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:33.834944    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:33:33.835021    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:33.845658    4162 logs.go:276] 0 containers: []
	W0813 17:33:33.845669    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:33.845728    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:33.856285    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:33:33.856305    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:33.856310    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:33.860758    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:33.860764    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:33.896060    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:33:33.896071    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:33:33.909982    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:33:33.909993    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:33:33.922057    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:33:33.922067    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:33.933831    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:33:33.933841    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:33:33.945595    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:33:33.945604    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:33:33.957032    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:33:33.957043    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:33:33.968556    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:33:33.968567    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:33:33.989508    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:33:33.989519    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:33:34.006671    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:34.006681    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:34.030712    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:34.030719    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:34.064555    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:33:34.064564    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:33:34.076072    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:33:34.076083    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:33:34.087390    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:33:34.087402    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:33:36.609029    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:41.609518    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:41.609752    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:41.635365    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:33:41.635502    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:41.652769    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:33:41.652863    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:41.668640    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:33:41.668731    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:41.684534    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:33:41.684606    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:41.708169    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:33:41.708246    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:41.721890    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:33:41.721982    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:41.741086    4162 logs.go:276] 0 containers: []
	W0813 17:33:41.741098    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:41.741162    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:41.757219    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:33:41.757236    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:33:41.757241    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:33:41.771729    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:33:41.771740    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:33:41.789784    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:33:41.789795    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:41.801801    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:41.801812    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:41.835239    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:41.835247    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:41.871005    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:33:41.871017    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:33:41.883564    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:33:41.883577    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:33:41.899269    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:33:41.899280    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:33:41.911192    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:41.911203    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:41.915981    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:33:41.915989    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:33:41.931995    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:33:41.932006    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:33:41.947384    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:33:41.947396    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:33:41.958834    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:41.958844    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:41.983843    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:33:41.983853    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:33:41.996327    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:33:41.996338    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:33:44.509981    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:49.512285    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:49.512575    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:49.546610    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:33:49.546767    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:49.565175    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:33:49.565285    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:49.579602    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:33:49.579684    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:49.593408    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:33:49.593487    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:49.604351    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:33:49.604432    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:49.615377    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:33:49.615449    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:49.626120    4162 logs.go:276] 0 containers: []
	W0813 17:33:49.626132    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:49.626197    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:49.636316    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:33:49.636334    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:49.636339    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:49.673087    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:33:49.673102    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:33:49.687616    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:33:49.687626    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:33:49.705823    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:49.705834    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:49.710646    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:33:49.710652    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:33:49.726759    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:33:49.726771    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:33:49.744436    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:33:49.744447    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:33:49.755850    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:49.755860    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:49.779436    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:33:49.779449    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:33:49.792192    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:33:49.792205    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:33:49.809874    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:33:49.809885    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:33:49.824829    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:49.824841    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:49.859116    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:33:49.859125    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:33:49.876966    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:33:49.876976    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:49.888285    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:33:49.888296    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:33:52.402010    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:57.404508    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:57.404628    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:57.416603    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:33:57.416680    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:57.427948    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:33:57.428026    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:57.439425    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:33:57.439506    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:57.450183    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:33:57.450255    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:57.460763    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:33:57.460838    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:57.471246    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:33:57.471318    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:57.481378    4162 logs.go:276] 0 containers: []
	W0813 17:33:57.481391    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:57.481449    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:57.492164    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:33:57.492181    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:33:57.492186    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:33:57.504412    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:33:57.504423    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:57.516878    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:33:57.516889    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:33:57.528968    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:33:57.528979    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:33:57.540741    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:33:57.540752    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:33:57.558192    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:57.558202    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:57.594290    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:57.594302    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:57.599425    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:57.599432    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:57.622919    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:33:57.622926    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:33:57.635364    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:33:57.635374    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:33:57.647377    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:33:57.647387    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:33:57.661242    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:33:57.661253    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:33:57.676873    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:57.676884    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:57.713355    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:33:57.713369    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:33:57.728110    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:33:57.728121    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:34:00.253206    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:05.255339    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:05.255461    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:34:05.267517    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:34:05.267594    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:34:05.279133    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:34:05.279204    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:34:05.290468    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:34:05.290540    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:34:05.300954    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:34:05.301037    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:34:05.311623    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:34:05.311708    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:34:05.323055    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:34:05.323129    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:34:05.333788    4162 logs.go:276] 0 containers: []
	W0813 17:34:05.333800    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:34:05.333862    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:34:05.347016    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:34:05.347034    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:34:05.347039    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:34:05.358819    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:34:05.358830    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:34:05.393417    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:34:05.393427    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:34:05.405574    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:34:05.405585    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:34:05.430346    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:34:05.430355    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:34:05.464178    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:34:05.464190    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:34:05.475967    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:34:05.475978    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:34:05.487839    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:34:05.487851    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:34:05.503076    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:34:05.503086    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:34:05.520451    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:34:05.520464    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:34:05.536894    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:34:05.536905    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:34:05.541432    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:34:05.541440    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:34:05.555579    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:34:05.555589    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:34:05.567603    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:34:05.567614    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:34:05.582173    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:34:05.582184    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:34:08.104392    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:13.106507    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:13.106634    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:34:13.117900    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:34:13.117991    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:34:13.129149    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:34:13.129233    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:34:13.140102    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:34:13.140184    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:34:13.150775    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:34:13.150848    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:34:13.161325    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:34:13.161401    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:34:13.172193    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:34:13.172264    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:34:13.182729    4162 logs.go:276] 0 containers: []
	W0813 17:34:13.182742    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:34:13.182819    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:34:13.194883    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:34:13.194901    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:34:13.194907    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:34:13.207003    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:34:13.207014    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:34:13.224406    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:34:13.224415    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:34:13.249884    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:34:13.249899    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:34:13.285545    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:34:13.285559    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:34:13.300001    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:34:13.300014    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:34:13.312095    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:34:13.312109    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:34:13.324351    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:34:13.324362    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:34:13.336086    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:34:13.336096    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:34:13.347930    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:34:13.347940    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:34:13.383039    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:34:13.383049    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:34:13.388082    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:34:13.388091    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:34:13.402484    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:34:13.402495    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:34:13.417863    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:34:13.417873    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:34:13.429062    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:34:13.429073    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:34:15.942730    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:20.942829    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:20.942923    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:34:20.953444    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:34:20.953512    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:34:20.964097    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:34:20.964180    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:34:20.975364    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:34:20.975434    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:34:20.986691    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:34:20.986771    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:34:20.997277    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:34:20.997352    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:34:21.008142    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:34:21.008219    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:34:21.018189    4162 logs.go:276] 0 containers: []
	W0813 17:34:21.018199    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:34:21.018255    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:34:21.028964    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:34:21.028979    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:34:21.028984    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:34:21.033625    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:34:21.033633    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:34:21.048348    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:34:21.048358    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:34:21.062518    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:34:21.062528    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:34:21.074437    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:34:21.074448    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:34:21.086469    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:34:21.086481    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:34:21.098237    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:34:21.098248    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:34:21.112673    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:34:21.112684    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:34:21.124187    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:34:21.124198    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:34:21.135768    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:34:21.135780    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:34:21.171168    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:34:21.171180    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:34:21.189181    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:34:21.189192    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:34:21.200517    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:34:21.200530    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:34:21.235238    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:34:21.235255    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:34:21.246826    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:34:21.246838    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:34:23.771062    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:28.773285    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:28.773444    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:34:28.789793    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:34:28.789891    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:34:28.802966    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:34:28.803038    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:34:28.814757    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:34:28.814828    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:34:28.825058    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:34:28.825135    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:34:28.835376    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:34:28.835447    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:34:28.845860    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:34:28.845926    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:34:28.855548    4162 logs.go:276] 0 containers: []
	W0813 17:34:28.855558    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:34:28.855612    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:34:28.866034    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:34:28.866053    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:34:28.866059    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:34:28.900716    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:34:28.900728    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:34:28.912441    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:34:28.912451    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:34:28.924487    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:34:28.924501    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:34:28.937085    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:34:28.937100    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:34:28.948351    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:34:28.948362    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:34:28.962558    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:34:28.962569    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:34:28.984177    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:34:28.984187    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:34:28.999652    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:34:28.999662    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:34:29.011231    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:34:29.011243    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:34:29.047034    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:34:29.047045    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:34:29.065363    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:34:29.065372    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:34:29.069867    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:34:29.069875    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:34:29.085530    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:34:29.085541    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:34:29.102502    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:34:29.102513    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:34:31.628902    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:36.631021    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:36.631123    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:34:36.645846    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:34:36.645933    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:34:36.656874    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:34:36.656963    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:34:36.668602    4162 logs.go:276] 4 containers: [48dc668317e4 ebb5807747c3 edc79ce83d8a 7e4d0301e234]
	I0813 17:34:36.668680    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:34:36.678885    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:34:36.678959    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:34:36.689447    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:34:36.689527    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:34:36.699707    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:34:36.699778    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:34:36.710127    4162 logs.go:276] 0 containers: []
	W0813 17:34:36.710138    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:34:36.710200    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:34:36.720686    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:34:36.720703    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:34:36.720709    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:34:36.732487    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:34:36.732497    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:34:36.744556    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:34:36.744568    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:34:36.758729    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:34:36.758739    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:34:36.770330    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:34:36.770340    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:34:36.782058    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:34:36.782069    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:34:36.806655    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:34:36.806665    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:34:36.841611    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:34:36.841620    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:34:36.855672    4162 logs.go:123] Gathering logs for coredns [ebb5807747c3] ...
	I0813 17:34:36.855682    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb5807747c3"
	I0813 17:34:36.867462    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:34:36.867476    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:34:36.872256    4162 logs.go:123] Gathering logs for coredns [48dc668317e4] ...
	I0813 17:34:36.872263    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48dc668317e4"
	I0813 17:34:36.883553    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:34:36.883563    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:34:36.894845    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:34:36.894855    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:34:36.909764    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:34:36.909773    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:34:36.927093    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:34:36.927102    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:34:39.464956    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:44.467144    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:44.471733    4162 out.go:177] 
	W0813 17:34:44.475604    4162 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0813 17:34:44.475616    4162 out.go:239] * 
	W0813 17:34:44.476315    4162 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:34:44.487603    4162 out.go:177] 

** /stderr **
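
The repeated "Checking apiserver healthz ..." / "stopped: ... context deadline exceeded (Client.Timeout exceeded while awaiting headers)" pairs in the stderr above are minikube's api_server.go polling the apiserver's /healthz endpoint with a short per-request client timeout until the overall 6m0s node wait expires. The following is a minimal, hypothetical Go sketch of that polling pattern only; the function name, timeouts, back-off, and InsecureSkipVerify setting are illustrative assumptions, not minikube's actual implementation.

    // waitForHealthz probes url until it returns 200 OK or the overall
    // deadline passes. Each probe uses its own short client timeout, which
    // is what produces "Client.Timeout exceeded while awaiting headers".
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, overall, perTry time.Duration) error {
        client := &http.Client{
            Timeout: perTry, // each probe gives up after perTry
            Transport: &http.Transport{
                // assumption: the apiserver presents a self-signed cert during bring-up
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(overall)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz reported healthy
                }
            }
            time.Sleep(2 * time.Second) // brief pause between probes (assumed value)
        }
        return fmt.Errorf("apiserver healthz never reported healthy: %s", url)
    }

    func main() {
        err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute, 5*time.Second)
        if err != nil {
            fmt.Println("X Exiting due to GUEST_START:", err)
        }
    }

In the run above every probe timed out, so the loop exhausted the 6m0s budget and the start exited with GUEST_START, matching the error box in the stderr capture.
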
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-126000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-08-13 17:34:44.587677 -0700 PDT m=+2924.043699417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-126000 -n running-upgrade-126000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-126000 -n running-upgrade-126000: exit status 2 (15.703039625s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
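
The "(may be ok)" note above reflects how the post-mortem helpers treat `minikube status`: a non-zero exit code describes cluster state rather than a harness failure, so it is logged and the log collection continues. A minimal Go sketch of that pattern is below; the tolerated-code handling and messages are assumptions for illustration, not the actual helpers_test.go code.

    // Run `minikube status`, extract the process exit code, and report a
    // non-zero code as informational instead of aborting the post-mortem.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "status",
            "--format={{.Host}}", "-p", "running-upgrade-126000")
        out, err := cmd.Output()
        code := 0
        if exitErr, ok := err.(*exec.ExitError); ok {
            code = exitErr.ExitCode() // status encodes unhealthy components in its exit code
        } else if err != nil {
            fmt.Println("could not run status:", err) // e.g. binary missing
            return
        }
        fmt.Printf("host state: %s\n", out)
        if code != 0 {
            fmt.Printf("status error: exit status %d (may be ok)\n", code)
        }
    }
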
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-126000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-365000          | force-systemd-flag-365000 | jenkins | v1.33.1 | 13 Aug 24 17:24 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-815000              | force-systemd-env-815000  | jenkins | v1.33.1 | 13 Aug 24 17:24 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-815000           | force-systemd-env-815000  | jenkins | v1.33.1 | 13 Aug 24 17:24 PDT | 13 Aug 24 17:24 PDT |
	| start   | -p docker-flags-903000                | docker-flags-903000       | jenkins | v1.33.1 | 13 Aug 24 17:24 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-365000             | force-systemd-flag-365000 | jenkins | v1.33.1 | 13 Aug 24 17:24 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-365000          | force-systemd-flag-365000 | jenkins | v1.33.1 | 13 Aug 24 17:24 PDT | 13 Aug 24 17:24 PDT |
	| start   | -p cert-expiration-967000             | cert-expiration-967000    | jenkins | v1.33.1 | 13 Aug 24 17:24 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-903000 ssh               | docker-flags-903000       | jenkins | v1.33.1 | 13 Aug 24 17:25 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-903000 ssh               | docker-flags-903000       | jenkins | v1.33.1 | 13 Aug 24 17:25 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-903000                | docker-flags-903000       | jenkins | v1.33.1 | 13 Aug 24 17:25 PDT | 13 Aug 24 17:25 PDT |
	| start   | -p cert-options-114000                | cert-options-114000       | jenkins | v1.33.1 | 13 Aug 24 17:25 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-114000 ssh               | cert-options-114000       | jenkins | v1.33.1 | 13 Aug 24 17:25 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-114000 -- sudo        | cert-options-114000       | jenkins | v1.33.1 | 13 Aug 24 17:25 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-114000                | cert-options-114000       | jenkins | v1.33.1 | 13 Aug 24 17:25 PDT | 13 Aug 24 17:25 PDT |
	| start   | -p running-upgrade-126000             | minikube                  | jenkins | v1.26.0 | 13 Aug 24 17:25 PDT | 13 Aug 24 17:26 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-126000             | running-upgrade-126000    | jenkins | v1.33.1 | 13 Aug 24 17:26 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-967000             | cert-expiration-967000    | jenkins | v1.33.1 | 13 Aug 24 17:28 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-967000             | cert-expiration-967000    | jenkins | v1.33.1 | 13 Aug 24 17:28 PDT | 13 Aug 24 17:28 PDT |
	| start   | -p kubernetes-upgrade-397000          | kubernetes-upgrade-397000 | jenkins | v1.33.1 | 13 Aug 24 17:28 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-397000          | kubernetes-upgrade-397000 | jenkins | v1.33.1 | 13 Aug 24 17:28 PDT | 13 Aug 24 17:28 PDT |
	| start   | -p kubernetes-upgrade-397000          | kubernetes-upgrade-397000 | jenkins | v1.33.1 | 13 Aug 24 17:28 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-397000          | kubernetes-upgrade-397000 | jenkins | v1.33.1 | 13 Aug 24 17:28 PDT | 13 Aug 24 17:28 PDT |
	| start   | -p stopped-upgrade-967000             | minikube                  | jenkins | v1.26.0 | 13 Aug 24 17:28 PDT | 13 Aug 24 17:29 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-967000 stop           | minikube                  | jenkins | v1.26.0 | 13 Aug 24 17:29 PDT | 13 Aug 24 17:29 PDT |
	| start   | -p stopped-upgrade-967000             | stopped-upgrade-967000    | jenkins | v1.33.1 | 13 Aug 24 17:29 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/13 17:29:23
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 17:29:23.004685    4376 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:29:23.004859    4376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:29:23.004864    4376 out.go:304] Setting ErrFile to fd 2...
	I0813 17:29:23.004867    4376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:29:23.005034    4376 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:29:23.006351    4376 out.go:298] Setting JSON to false
	I0813 17:29:23.026298    4376 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3527,"bootTime":1723591836,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:29:23.026370    4376 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:29:23.031189    4376 out.go:177] * [stopped-upgrade-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:29:23.038298    4376 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:29:23.038335    4376 notify.go:220] Checking for updates...
	I0813 17:29:23.044259    4376 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:29:23.047302    4376 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:29:23.050239    4376 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:29:23.053260    4376 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:29:23.056281    4376 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:29:23.057945    4376 config.go:182] Loaded profile config "stopped-upgrade-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0813 17:29:23.061245    4376 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0813 17:29:23.064297    4376 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:29:23.068135    4376 out.go:177] * Using the qemu2 driver based on existing profile
	I0813 17:29:23.075288    4376 start.go:297] selected driver: qemu2
	I0813 17:29:23.075296    4376 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50478 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0813 17:29:23.075355    4376 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:29:23.077785    4376 cni.go:84] Creating CNI manager for ""
	I0813 17:29:23.077803    4376 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0813 17:29:23.077832    4376 start.go:340] cluster config:
	{Name:stopped-upgrade-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50478 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0813 17:29:23.077890    4376 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:29:23.085240    4376 out.go:177] * Starting "stopped-upgrade-967000" primary control-plane node in "stopped-upgrade-967000" cluster
	I0813 17:29:23.089267    4376 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0813 17:29:23.089283    4376 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0813 17:29:23.089289    4376 cache.go:56] Caching tarball of preloaded images
	I0813 17:29:23.089348    4376 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:29:23.089354    4376 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0813 17:29:23.089407    4376 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/config.json ...
	I0813 17:29:23.089805    4376 start.go:360] acquireMachinesLock for stopped-upgrade-967000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:29:23.089841    4376 start.go:364] duration metric: took 30.083µs to acquireMachinesLock for "stopped-upgrade-967000"
	I0813 17:29:23.089852    4376 start.go:96] Skipping create...Using existing machine configuration
	I0813 17:29:23.089857    4376 fix.go:54] fixHost starting: 
	I0813 17:29:23.089976    4376 fix.go:112] recreateIfNeeded on stopped-upgrade-967000: state=Stopped err=<nil>
	W0813 17:29:23.089985    4376 fix.go:138] unexpected machine state, will restart: <nil>
	I0813 17:29:23.094245    4376 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-967000" ...
	I0813 17:29:24.302381    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:29:24.302832    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:29:24.344489    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:29:24.344616    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:29:24.368264    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:29:24.368367    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:29:24.382924    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:29:24.383017    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:29:24.395330    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:29:24.395406    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:29:24.406288    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:29:24.406350    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:29:24.416827    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:29:24.416892    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:29:24.431580    4162 logs.go:276] 0 containers: []
	W0813 17:29:24.431592    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:29:24.431656    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:29:24.442009    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:29:24.442024    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:29:24.442029    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:29:24.456120    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:29:24.456131    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:29:24.467782    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:29:24.467797    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:29:24.479075    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:29:24.479089    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:29:24.490566    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:29:24.490579    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:29:24.505105    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:29:24.505115    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:29:24.520390    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:29:24.520405    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:29:24.531727    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:29:24.531738    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:29:24.549235    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:29:24.549248    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:29:24.560072    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:29:24.560084    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:29:24.599274    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:29:24.599286    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:29:24.621498    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:29:24.621511    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:29:24.646099    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:29:24.646106    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:29:24.685283    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:29:24.685294    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:29:24.689519    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:29:24.689527    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:29:24.708995    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:29:24.709006    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:29:24.725992    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:29:24.726005    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
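The repeated pattern above is how the runner collects per-component logs: `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` to find container IDs, then `docker logs --tail 400 <id>` for each hit. A minimal Go sketch of that pattern, shelling out to the docker CLI directly (not minikube's actual ssh_runner-based implementation):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := containerIDs("kube-apiserver")
        if err != nil {
            panic(err)
        }
        for _, id := range ids {
            // Mirrors: docker logs --tail 400 <id>
            logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            fmt.Printf("=== %s ===\n%s\n", id, logs)
        }
    }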
	I0813 17:29:23.102239    4376 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:29:23.102312    4376 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/stopped-upgrade-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/stopped-upgrade-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/stopped-upgrade-967000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50444-:22,hostfwd=tcp::50445-:2376,hostname=stopped-upgrade-967000 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/stopped-upgrade-967000/disk.qcow2
	I0813 17:29:23.146801    4376 main.go:141] libmachine: STDOUT: 
	I0813 17:29:23.146822    4376 main.go:141] libmachine: STDERR: 
	I0813 17:29:23.146828    4376 main.go:141] libmachine: Waiting for VM to start (ssh -p 50444 docker@127.0.0.1)...
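Here process 4376 boots the stopped-upgrade-967000 VM under qemu-system-aarch64 with guest port 22 forwarded to host port 50444, then blocks until SSH answers. A minimal sketch of such a wait loop, under the simplifying assumption that a successful TCP dial is enough to signal readiness (the real check completes an SSH handshake):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH polls the forwarded port until a TCP connection succeeds.
    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("timed out waiting for %s", addr)
    }

    func main() {
        if err := waitForSSH("127.0.0.1:50444", 5*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("SSH port is accepting connections")
    }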
	I0813 17:29:27.239269    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:29:32.241896    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
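Each probe above hits https://10.0.2.15:8443/healthz with a short per-request timeout; when it trips, the runner records the `Client.Timeout exceeded` error, re-gathers the component logs, and retries. A minimal sketch of one probe, assuming a self-signed apiserver certificate (hence the skipped verification):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func checkHealthz(url string) error {
        client := &http.Client{
            // This timeout produces the "context deadline exceeded
            // (Client.Timeout exceeded while awaiting headers)" error seen above.
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        fmt.Println(checkHealthz("https://10.0.2.15:8443/healthz"))
    }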
	I0813 17:29:32.242075    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:29:32.253408    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:29:32.253481    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:29:32.264374    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:29:32.264447    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:29:32.274887    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:29:32.274954    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:29:32.285449    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:29:32.285519    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:29:32.296119    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:29:32.296181    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:29:32.306843    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:29:32.306915    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:29:32.324805    4162 logs.go:276] 0 containers: []
	W0813 17:29:32.324818    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:29:32.324878    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:29:32.337713    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:29:32.337735    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:29:32.337740    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:29:32.351976    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:29:32.351986    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:29:32.369863    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:29:32.369873    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:29:32.381638    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:29:32.381649    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:29:32.393126    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:29:32.393137    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:29:32.406019    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:29:32.406030    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:29:32.410816    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:29:32.410822    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:29:32.455611    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:29:32.455622    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:29:32.476148    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:29:32.476158    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:29:32.490924    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:29:32.490937    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:29:32.502687    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:29:32.502698    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:29:32.514562    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:29:32.514572    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:29:32.537818    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:29:32.537826    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:29:32.576017    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:29:32.576028    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:29:32.593095    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:29:32.593104    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:29:32.604528    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:29:32.604539    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:29:32.619494    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:29:32.619504    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:29:35.133579    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:29:40.135777    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:29:40.136102    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:29:40.163657    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:29:40.163768    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:29:40.181578    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:29:40.181671    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:29:40.194976    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:29:40.195050    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:29:40.207171    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:29:40.207237    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:29:40.217273    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:29:40.217328    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:29:40.228006    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:29:40.228071    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:29:40.238460    4162 logs.go:276] 0 containers: []
	W0813 17:29:40.238472    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:29:40.238519    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:29:40.249164    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:29:40.249179    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:29:40.249184    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:29:40.283887    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:29:40.283903    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:29:40.297925    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:29:40.297937    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:29:40.323737    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:29:40.323748    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:29:40.337938    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:29:40.337949    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:29:40.349534    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:29:40.349543    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:29:40.361680    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:29:40.361692    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:29:40.379665    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:29:40.379673    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:29:40.418335    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:29:40.418344    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:29:40.422901    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:29:40.422908    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:29:40.434147    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:29:40.434158    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:29:40.445995    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:29:40.446007    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:29:40.457668    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:29:40.457679    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:29:40.469422    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:29:40.469434    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:29:40.487026    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:29:40.487035    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:29:40.501220    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:29:40.501231    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:29:40.513006    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:29:40.513019    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:29:43.038313    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:29:43.066718    4376 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/config.json ...
	I0813 17:29:43.067533    4376 machine.go:94] provisionDockerMachine start ...
	I0813 17:29:43.067711    4376 main.go:141] libmachine: Using SSH client type: native
	I0813 17:29:43.068193    4376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1047f05a0] 0x1047f2e00 <nil>  [] 0s} localhost 50444 <nil> <nil>}
	I0813 17:29:43.068206    4376 main.go:141] libmachine: About to run SSH command:
	hostname
	I0813 17:29:43.144083    4376 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0813 17:29:43.144109    4376 buildroot.go:166] provisioning hostname "stopped-upgrade-967000"
	I0813 17:29:43.144196    4376 main.go:141] libmachine: Using SSH client type: native
	I0813 17:29:43.144404    4376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1047f05a0] 0x1047f2e00 <nil>  [] 0s} localhost 50444 <nil> <nil>}
	I0813 17:29:43.144416    4376 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-967000 && echo "stopped-upgrade-967000" | sudo tee /etc/hostname
	I0813 17:29:43.214897    4376 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-967000
	
	I0813 17:29:43.214960    4376 main.go:141] libmachine: Using SSH client type: native
	I0813 17:29:43.215085    4376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1047f05a0] 0x1047f2e00 <nil>  [] 0s} localhost 50444 <nil> <nil>}
	I0813 17:29:43.215095    4376 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-967000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-967000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-967000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 17:29:43.276002    4376 main.go:141] libmachine: SSH cmd err, output: <nil>: 
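The two SSH commands above make hostname provisioning idempotent: set and persist the hostname, then append a 127.0.1.1 entry only when /etc/hosts lacks one. A minimal sketch driving the same sequence through the stock ssh CLI (port and key path taken from the log; the /etc/hosts check is simplified relative to the log's grep/sed logic):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func sshRun(cmd string) ([]byte, error) {
        return exec.Command("ssh", "-p", "50444",
            "-i", "/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/stopped-upgrade-967000/id_rsa",
            "docker@127.0.0.1", cmd).CombinedOutput()
    }

    func main() {
        host := "stopped-upgrade-967000"
        // Same two-step sequence as the log: set and persist the hostname,
        // then make /etc/hosts resolve it to 127.0.1.1 if it doesn't already.
        set := fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname`, host)
        patch := fmt.Sprintf(`grep -q '%[1]s' /etc/hosts || echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts`, host)
        for _, cmd := range []string{set, patch} {
            out, err := sshRun(cmd)
            fmt.Printf("%s\n%s err=%v\n", cmd, out, err)
        }
    }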
	I0813 17:29:43.276015    4376 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19429-1127/.minikube CaCertPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19429-1127/.minikube}
	I0813 17:29:43.276024    4376 buildroot.go:174] setting up certificates
	I0813 17:29:43.276028    4376 provision.go:84] configureAuth start
	I0813 17:29:43.276034    4376 provision.go:143] copyHostCerts
	I0813 17:29:43.276134    4376 exec_runner.go:144] found /Users/jenkins/minikube-integration/19429-1127/.minikube/ca.pem, removing ...
	I0813 17:29:43.276141    4376 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19429-1127/.minikube/ca.pem
	I0813 17:29:43.276269    4376 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19429-1127/.minikube/ca.pem (1082 bytes)
	I0813 17:29:43.276473    4376 exec_runner.go:144] found /Users/jenkins/minikube-integration/19429-1127/.minikube/cert.pem, removing ...
	I0813 17:29:43.276480    4376 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19429-1127/.minikube/cert.pem
	I0813 17:29:43.276533    4376 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19429-1127/.minikube/cert.pem (1123 bytes)
	I0813 17:29:43.276650    4376 exec_runner.go:144] found /Users/jenkins/minikube-integration/19429-1127/.minikube/key.pem, removing ...
	I0813 17:29:43.276653    4376 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19429-1127/.minikube/key.pem
	I0813 17:29:43.276701    4376 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19429-1127/.minikube/key.pem (1675 bytes)
	I0813 17:29:43.276783    4376 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-967000 san=[127.0.0.1 localhost minikube stopped-upgrade-967000]
	I0813 17:29:43.321743    4376 provision.go:177] copyRemoteCerts
	I0813 17:29:43.321771    4376 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 17:29:43.321778    4376 sshutil.go:53] new ssh client: &{IP:localhost Port:50444 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/stopped-upgrade-967000/id_rsa Username:docker}
	I0813 17:29:43.350949    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 17:29:43.357927    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0813 17:29:43.365107    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 17:29:43.371776    4376 provision.go:87] duration metric: took 95.737583ms to configureAuth
	I0813 17:29:43.371784    4376 buildroot.go:189] setting minikube options for container-runtime
	I0813 17:29:43.371878    4376 config.go:182] Loaded profile config "stopped-upgrade-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0813 17:29:43.371915    4376 main.go:141] libmachine: Using SSH client type: native
	I0813 17:29:43.372004    4376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1047f05a0] 0x1047f2e00 <nil>  [] 0s} localhost 50444 <nil> <nil>}
	I0813 17:29:43.372009    4376 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0813 17:29:43.430095    4376 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0813 17:29:43.430105    4376 buildroot.go:70] root file system type: tmpfs
	I0813 17:29:43.430180    4376 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0813 17:29:43.430224    4376 main.go:141] libmachine: Using SSH client type: native
	I0813 17:29:43.430355    4376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1047f05a0] 0x1047f2e00 <nil>  [] 0s} localhost 50444 <nil> <nil>}
	I0813 17:29:43.430391    4376 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0813 17:29:43.491524    4376 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0813 17:29:43.491572    4376 main.go:141] libmachine: Using SSH client type: native
	I0813 17:29:43.491681    4376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1047f05a0] 0x1047f2e00 <nil>  [] 0s} localhost 50444 <nil> <nil>}
	I0813 17:29:43.491689    4376 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0813 17:29:43.854219    4376 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0813 17:29:43.854232    4376 machine.go:97] duration metric: took 786.698792ms to provisionDockerMachine
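The unit written above relies on a standard systemd idiom: the bare `ExecStart=` line clears any command inherited from the base configuration before the new one is set, and the file is staged as docker.service.new so the daemon is only reloaded and restarted when `diff` reports a change. A minimal sketch of rendering such a unit with text/template (a much-reduced template, not minikube's full one):

    package main

    import (
        "os"
        "text/template"
    )

    const unitTmpl = `[Service]
    # An empty ExecStart= clears the inherited command; without it systemd
    # refuses to start ("more than one ExecStart= setting").
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --label provider={{.Provider}}
    `

    func main() {
        tmpl := template.Must(template.New("unit").Parse(unitTmpl))
        // Render the unit; the SSH step in the log then stages it as
        // docker.service.new and runs:
        //   diff -u old new || { mv new old; systemctl daemon-reload; systemctl restart docker; }
        if err := tmpl.Execute(os.Stdout, struct{ Provider string }{"qemu2"}); err != nil {
            panic(err)
        }
    }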
	I0813 17:29:43.854239    4376 start.go:293] postStartSetup for "stopped-upgrade-967000" (driver="qemu2")
	I0813 17:29:43.854246    4376 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 17:29:43.854313    4376 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 17:29:43.854323    4376 sshutil.go:53] new ssh client: &{IP:localhost Port:50444 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/stopped-upgrade-967000/id_rsa Username:docker}
	I0813 17:29:43.885571    4376 ssh_runner.go:195] Run: cat /etc/os-release
	I0813 17:29:43.886904    4376 info.go:137] Remote host: Buildroot 2021.02.12
	I0813 17:29:43.886912    4376 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19429-1127/.minikube/addons for local assets ...
	I0813 17:29:43.887011    4376 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19429-1127/.minikube/files for local assets ...
	I0813 17:29:43.887128    4376 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19429-1127/.minikube/files/etc/ssl/certs/16352.pem -> 16352.pem in /etc/ssl/certs
	I0813 17:29:43.887281    4376 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0813 17:29:43.890059    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/files/etc/ssl/certs/16352.pem --> /etc/ssl/certs/16352.pem (1708 bytes)
	I0813 17:29:43.897242    4376 start.go:296] duration metric: took 42.997833ms for postStartSetup
	I0813 17:29:43.897255    4376 fix.go:56] duration metric: took 20.807701042s for fixHost
	I0813 17:29:43.897291    4376 main.go:141] libmachine: Using SSH client type: native
	I0813 17:29:43.897402    4376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1047f05a0] 0x1047f2e00 <nil>  [] 0s} localhost 50444 <nil> <nil>}
	I0813 17:29:43.897410    4376 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0813 17:29:43.953506    4376 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723595384.036107046
	
	I0813 17:29:43.953514    4376 fix.go:216] guest clock: 1723595384.036107046
	I0813 17:29:43.953518    4376 fix.go:229] Guest: 2024-08-13 17:29:44.036107046 -0700 PDT Remote: 2024-08-13 17:29:43.897257 -0700 PDT m=+20.923896168 (delta=138.850046ms)
	I0813 17:29:43.953534    4376 fix.go:200] guest clock delta is within tolerance: 138.850046ms
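The clock check above runs `date +%s.%N` in the guest, parses the epoch timestamp, and compares the host/guest delta (138.850046ms here) against a tolerance before deciding whether to resync. A minimal sketch of that comparison; the one-second tolerance is an assumption for illustration, not minikube's actual threshold:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // guestTime parses `date +%s.%N` output; %N always prints nine digits,
    // so the fractional part maps directly to nanoseconds.
    func guestTime(dateOutput string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := guestTime("1723595384.036107046") // value from the log
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        const tolerance = time.Second // assumed for illustration
        fmt.Printf("delta=%v within tolerance: %v\n",
            delta, math.Abs(delta.Seconds()) <= tolerance.Seconds())
    }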
	I0813 17:29:43.953539    4376 start.go:83] releasing machines lock for "stopped-upgrade-967000", held for 20.863995084s
	I0813 17:29:43.953605    4376 ssh_runner.go:195] Run: cat /version.json
	I0813 17:29:43.953616    4376 sshutil.go:53] new ssh client: &{IP:localhost Port:50444 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/stopped-upgrade-967000/id_rsa Username:docker}
	I0813 17:29:43.953636    4376 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0813 17:29:43.953655    4376 sshutil.go:53] new ssh client: &{IP:localhost Port:50444 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/stopped-upgrade-967000/id_rsa Username:docker}
	W0813 17:29:43.954195    4376 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50444: connect: connection refused
	I0813 17:29:43.954219    4376 retry.go:31] will retry after 320.897532ms: dial tcp [::1]:50444: connect: connection refused
	W0813 17:29:44.333113    4376 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0813 17:29:44.333284    4376 ssh_runner.go:195] Run: systemctl --version
	I0813 17:29:44.336765    4376 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0813 17:29:44.340263    4376 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0813 17:29:44.340315    4376 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0813 17:29:44.345428    4376 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0813 17:29:44.354282    4376 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0813 17:29:44.354298    4376 start.go:495] detecting cgroup driver to use...
	I0813 17:29:44.354426    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 17:29:44.366837    4376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0813 17:29:44.370798    4376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0813 17:29:44.375605    4376 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0813 17:29:44.375658    4376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0813 17:29:44.379710    4376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0813 17:29:44.383608    4376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0813 17:29:44.388638    4376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0813 17:29:44.391645    4376 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0813 17:29:44.394420    4376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0813 17:29:44.397763    4376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0813 17:29:44.401122    4376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0813 17:29:44.404026    4376 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 17:29:44.406425    4376 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 17:29:44.409372    4376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 17:29:44.490857    4376 ssh_runner.go:195] Run: sudo systemctl restart containerd
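The sed one-liners above rewrite /etc/containerd/config.toml in place: pin the sandbox image, force `SystemdCgroup = false` so containerd uses the cgroupfs driver, and normalize the runc runtime type, before the daemon-reload and restart. A minimal sketch of the SystemdCgroup rewrite expressed as a Go regexp instead of sed:

    package main

    import (
        "fmt"
        "regexp"
    )

    // setCgroupfs is the equivalent of:
    //   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    func setCgroupfs(config string) string {
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        return re.ReplaceAllString(config, "${1}SystemdCgroup = false")
    }

    func main() {
        in := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true"
        fmt.Println(setCgroupfs(in))
    }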
	I0813 17:29:44.501468    4376 start.go:495] detecting cgroup driver to use...
	I0813 17:29:44.501537    4376 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0813 17:29:44.507884    4376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0813 17:29:44.512216    4376 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0813 17:29:44.517684    4376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0813 17:29:44.522762    4376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0813 17:29:44.527499    4376 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0813 17:29:44.585665    4376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0813 17:29:44.591649    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 17:29:44.597300    4376 ssh_runner.go:195] Run: which cri-dockerd
	I0813 17:29:44.598620    4376 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0813 17:29:44.601570    4376 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0813 17:29:44.606746    4376 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0813 17:29:44.692971    4376 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0813 17:29:44.765502    4376 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0813 17:29:44.765553    4376 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0813 17:29:44.771060    4376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 17:29:44.841908    4376 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0813 17:29:45.964899    4376 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.122990209s)
	I0813 17:29:45.964966    4376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0813 17:29:45.969830    4376 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0813 17:29:45.976466    4376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0813 17:29:45.981035    4376 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0813 17:29:46.058013    4376 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0813 17:29:46.134622    4376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 17:29:46.197821    4376 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0813 17:29:46.203788    4376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0813 17:29:46.208155    4376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 17:29:46.287531    4376 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0813 17:29:46.324678    4376 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0813 17:29:46.324767    4376 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0813 17:29:46.326976    4376 start.go:563] Will wait 60s for crictl version
	I0813 17:29:46.327027    4376 ssh_runner.go:195] Run: which crictl
	I0813 17:29:46.328854    4376 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0813 17:29:46.343021    4376 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0813 17:29:46.343083    4376 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0813 17:29:46.359959    4376 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0813 17:29:46.384164    4376 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0813 17:29:46.384234    4376 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0813 17:29:46.385471    4376 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 17:29:46.389776    4376 kubeadm.go:883] updating cluster {Name:stopped-upgrade-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50478 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0813 17:29:46.389819    4376 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0813 17:29:46.389866    4376 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0813 17:29:46.399994    4376 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0813 17:29:46.400004    4376 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0813 17:29:46.400047    4376 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0813 17:29:46.403041    4376 ssh_runner.go:195] Run: which lz4
	I0813 17:29:46.404415    4376 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0813 17:29:46.405584    4376 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0813 17:29:46.405594    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0813 17:29:47.311636    4376 docker.go:649] duration metric: took 907.261792ms to copy over tarball
	I0813 17:29:47.311687    4376 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
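Because the guest's image store lacks the registry.k8s.io images, the ~360 MB preloaded-images tarball is copied in and unpacked over /var, pre-populating /var/lib/docker before the daemon restart at 17:29:48. A minimal sketch of the unpack step (run locally here for brevity; in the log both commands go through SSH):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        fmt.Printf("%s %v\n%s err=%v\n", name, args, out, err)
    }

    func main() {
        // Mirrors: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
        run("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        // Mirrors: sudo systemctl restart docker (so the daemon picks up the restored image store)
        run("sudo", "systemctl", "restart", "docker")
    }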
	I0813 17:29:48.041047    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:29:48.041171    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:29:48.052849    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:29:48.052934    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:29:48.065403    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:29:48.065492    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:29:48.077565    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:29:48.077648    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:29:48.096413    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:29:48.096487    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:29:48.108344    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:29:48.108418    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:29:48.120901    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:29:48.120974    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:29:48.132257    4162 logs.go:276] 0 containers: []
	W0813 17:29:48.132271    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:29:48.132335    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:29:48.144900    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:29:48.144919    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:29:48.144924    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:29:48.166976    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:29:48.166992    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:29:48.183538    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:29:48.183550    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:29:48.196318    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:29:48.196332    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:29:48.212389    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:29:48.212403    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:29:48.228945    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:29:48.228957    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:29:48.247917    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:29:48.247932    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:29:48.260700    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:29:48.260711    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:29:48.273594    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:29:48.273608    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:29:48.318606    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:29:48.318626    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:29:48.323792    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:29:48.323805    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:29:48.336916    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:29:48.336929    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:29:48.350223    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:29:48.350237    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:29:48.390100    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:29:48.390112    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:29:48.408353    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:29:48.408370    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:29:48.428262    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:29:48.428275    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:29:48.441620    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:29:48.441632    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:29:50.969214    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:29:48.475708    4376 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.164026208s)
	I0813 17:29:48.475720    4376 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0813 17:29:48.491116    4376 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0813 17:29:48.494155    4376 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0813 17:29:48.499541    4376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 17:29:48.577609    4376 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0813 17:29:50.165507    4376 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.587904042s)
	I0813 17:29:50.165611    4376 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0813 17:29:50.180203    4376 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0813 17:29:50.180211    4376 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0813 17:29:50.180217    4376 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0813 17:29:50.184497    4376 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 17:29:50.186486    4376 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0813 17:29:50.188436    4376 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0813 17:29:50.188559    4376 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 17:29:50.190312    4376 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0813 17:29:50.190353    4376 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0813 17:29:50.191796    4376 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0813 17:29:50.191800    4376 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0813 17:29:50.193097    4376 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0813 17:29:50.193211    4376 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0813 17:29:50.194343    4376 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0813 17:29:50.194354    4376 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0813 17:29:50.195725    4376 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0813 17:29:50.195859    4376 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0813 17:29:50.197098    4376 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0813 17:29:50.197608    4376 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0813 17:29:50.636414    4376 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0813 17:29:50.644153    4376 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0813 17:29:50.652131    4376 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0813 17:29:50.654596    4376 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0813 17:29:50.654607    4376 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0813 17:29:50.654621    4376 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0813 17:29:50.654643    4376 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0813 17:29:50.664044    4376 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0813 17:29:50.664070    4376 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0813 17:29:50.664125    4376 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0813 17:29:50.676840    4376 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0813 17:29:50.676866    4376 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0813 17:29:50.676917    4376 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0813 17:29:50.687317    4376 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0813 17:29:50.687337    4376 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0813 17:29:50.687351    4376 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0813 17:29:50.687360    4376 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0813 17:29:50.687394    4376 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0813 17:29:50.691028    4376 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0813 17:29:50.697058    4376 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0813 17:29:50.698636    4376 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0813 17:29:50.707231    4376 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0813 17:29:50.707250    4376 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0813 17:29:50.707297    4376 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	W0813 17:29:50.713366    4376 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0813 17:29:50.713474    4376 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0813 17:29:50.714629    4376 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0813 17:29:50.718117    4376 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0813 17:29:50.719625    4376 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0813 17:29:50.743995    4376 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0813 17:29:50.744014    4376 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0813 17:29:50.744037    4376 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0813 17:29:50.744055    4376 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0813 17:29:50.744050    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0813 17:29:50.743996    4376 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0813 17:29:50.744090    4376 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0813 17:29:50.744110    4376 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0813 17:29:50.762379    4376 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0813 17:29:50.762383    4376 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0813 17:29:50.762489    4376 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0813 17:29:50.764095    4376 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0813 17:29:50.764108    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0813 17:29:50.773841    4376 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0813 17:29:50.773853    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0813 17:29:50.810589    4376 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0813 17:29:50.810692    4376 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 17:29:50.828342    4376 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0813 17:29:50.828365    4376 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0813 17:29:50.828371    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0813 17:29:50.830107    4376 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0813 17:29:50.830125    4376 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 17:29:50.830180    4376 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 17:29:50.873695    4376 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0813 17:29:50.873718    4376 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0813 17:29:50.873828    4376 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0813 17:29:50.875203    4376 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0813 17:29:50.875214    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0813 17:29:50.901990    4376 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0813 17:29:50.902003    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0813 17:29:51.138199    4376 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0813 17:29:51.138236    4376 cache_images.go:92] duration metric: took 958.026583ms to LoadCachedImages
	W0813 17:29:51.138276    4376 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
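
The three-step pattern repeated above — a stat existence check on the guest, a transfer of the cached tarball when it is missing, then `sudo cat <tarball> | docker load` — is the cache-image load path the cache_images.go lines describe. Below is a minimal local sketch of that flow, with hypothetical paths and a plain file copy standing in for the SSH runner; it is an illustration of the shape, not the actual minikube implementation:

```go
// Minimal sketch of the cache-image load flow seen in the log, reduced to
// local commands for illustration. Paths are hypothetical; minikube runs
// the equivalent steps over SSH (ssh_runner.go).
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func loadCachedImage(cachedTarball, guestPath string) error {
	// Existence check, like the log's `stat -c "%s %y" <path>`.
	if _, err := os.Stat(guestPath); err != nil {
		// Not present on the guest: copy the tarball into place
		// (the log does this with scp).
		data, err := os.ReadFile(cachedTarball)
		if err != nil {
			return fmt.Errorf("read cache: %w", err)
		}
		if err := os.WriteFile(guestPath, data, 0o644); err != nil {
			return fmt.Errorf("transfer: %w", err)
		}
	}
	// Stream the tarball into the Docker daemon, as in
	// `sudo cat <path> | docker load`.
	cmd := exec.Command("docker", "load", "-i", guestPath)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Hypothetical cached tarball and target path.
	if err := loadCachedImage("/tmp/cache/pause_3.7", "/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```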
	I0813 17:29:51.138283    4376 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0813 17:29:51.138336    4376 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-967000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0813 17:29:51.138402    4376 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0813 17:29:51.152134    4376 cni.go:84] Creating CNI manager for ""
	I0813 17:29:51.152148    4376 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0813 17:29:51.152152    4376 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0813 17:29:51.152160    4376 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-967000 NodeName:stopped-upgrade-967000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0813 17:29:51.152231    4376 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-967000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 17:29:51.152488    4376 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0813 17:29:51.155512    4376 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 17:29:51.155548    4376 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 17:29:51.157918    4376 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0813 17:29:51.162417    4376 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 17:29:51.167403    4376 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0813 17:29:51.172239    4376 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0813 17:29:51.173405    4376 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 17:29:51.177284    4376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 17:29:51.252357    4376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0813 17:29:51.260039    4376 certs.go:68] Setting up /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000 for IP: 10.0.2.15
	I0813 17:29:51.260048    4376 certs.go:194] generating shared ca certs ...
	I0813 17:29:51.260056    4376 certs.go:226] acquiring lock for ca certs: {Name:mk1c25d4292e2fe754770039b132c434f4539a16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:29:51.260216    4376 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/ca.key
	I0813 17:29:51.260267    4376 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/proxy-client-ca.key
	I0813 17:29:51.260273    4376 certs.go:256] generating profile certs ...
	I0813 17:29:51.260365    4376 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/client.key
	I0813 17:29:51.260384    4376 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/apiserver.key.0ca3edb1
	I0813 17:29:51.260396    4376 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/apiserver.crt.0ca3edb1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0813 17:29:51.317086    4376 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/apiserver.crt.0ca3edb1 ...
	I0813 17:29:51.317112    4376 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/apiserver.crt.0ca3edb1: {Name:mk47dbff3f8e01159079760cbad8dab7726b13b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:29:51.317649    4376 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/apiserver.key.0ca3edb1 ...
	I0813 17:29:51.317655    4376 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/apiserver.key.0ca3edb1: {Name:mk20867504880706023e8d83a4e94a08ecbe57fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:29:51.317810    4376 certs.go:381] copying /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/apiserver.crt.0ca3edb1 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/apiserver.crt
	I0813 17:29:51.317945    4376 certs.go:385] copying /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/apiserver.key.0ca3edb1 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/apiserver.key
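
The profile certificate generated above carries the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15] — the service VIP, loopback, and node addresses. A self-contained sketch of issuing such a certificate with Go's crypto/x509 follows; the throwaway CA is an assumption for brevity (minikube reuses its existing minikubeCA key pair), and error handling is elided:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA (assumption: minikube would load .minikube/ca.{crt,key}).
	// Errors are ignored for brevity in this sketch.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the IP SANs from the log line above.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```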
	I0813 17:29:51.318107    4376 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/proxy-client.key
	I0813 17:29:51.318232    4376 certs.go:484] found cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/1635.pem (1338 bytes)
	W0813 17:29:51.318263    4376 certs.go:480] ignoring /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/1635_empty.pem, impossibly tiny 0 bytes
	I0813 17:29:51.318269    4376 certs.go:484] found cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 17:29:51.318299    4376 certs.go:484] found cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem (1082 bytes)
	I0813 17:29:51.318323    4376 certs.go:484] found cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem (1123 bytes)
	I0813 17:29:51.318346    4376 certs.go:484] found cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/key.pem (1675 bytes)
	I0813 17:29:51.318396    4376 certs.go:484] found cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/files/etc/ssl/certs/16352.pem (1708 bytes)
	I0813 17:29:51.318735    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 17:29:51.325855    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 17:29:51.332843    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 17:29:51.340542    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 17:29:51.347914    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0813 17:29:51.354704    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 17:29:51.361744    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 17:29:51.369143    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 17:29:51.376594    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 17:29:51.383353    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/1635.pem --> /usr/share/ca-certificates/1635.pem (1338 bytes)
	I0813 17:29:51.390024    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/files/etc/ssl/certs/16352.pem --> /usr/share/ca-certificates/16352.pem (1708 bytes)
	I0813 17:29:51.397167    4376 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 17:29:51.402417    4376 ssh_runner.go:195] Run: openssl version
	I0813 17:29:51.404437    4376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1635.pem && ln -fs /usr/share/ca-certificates/1635.pem /etc/ssl/certs/1635.pem"
	I0813 17:29:51.407590    4376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1635.pem
	I0813 17:29:51.408976    4376 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:53 /usr/share/ca-certificates/1635.pem
	I0813 17:29:51.408997    4376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1635.pem
	I0813 17:29:51.410782    4376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1635.pem /etc/ssl/certs/51391683.0"
	I0813 17:29:51.413802    4376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16352.pem && ln -fs /usr/share/ca-certificates/16352.pem /etc/ssl/certs/16352.pem"
	I0813 17:29:51.416980    4376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16352.pem
	I0813 17:29:51.418328    4376 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:53 /usr/share/ca-certificates/16352.pem
	I0813 17:29:51.418345    4376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16352.pem
	I0813 17:29:51.420076    4376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16352.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 17:29:51.422797    4376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 17:29:51.425987    4376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 17:29:51.427373    4376 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:46 /usr/share/ca-certificates/minikubeCA.pem
	I0813 17:29:51.427395    4376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 17:29:51.428960    4376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
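
The `openssl x509 -hash -noout` runs compute OpenSSL's subject hash, and each CA file is then linked as /etc/ssl/certs/<hash>.0 (51391683.0, 3ec20f2e.0, and b5213941.0 above) so that OpenSSL-based clients can locate it by hash. A small sketch of that hash-and-symlink step, shelling out to openssl; the paths are illustrative:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCACert replicates the log's pattern: compute the cert's subject hash
// with openssl, then symlink it into the cert directory as <hash>.0.
func linkCACert(pemPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := certDir + "/" + hash + ".0"
	_ = os.Remove(link) // mirror `ln -fs` semantics: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	// Illustrative paths matching the log's layout.
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```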
	I0813 17:29:51.431924    4376 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0813 17:29:51.433336    4376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0813 17:29:51.435166    4376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0813 17:29:51.436880    4376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0813 17:29:51.438843    4376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0813 17:29:51.440682    4376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0813 17:29:51.442602    4376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
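
`openssl x509 -checkend 86400` exits non-zero when a certificate expires within the next 86400 seconds (24 hours); the six runs above are how the restart path decides whether the control-plane certs are still usable. The same check expressed in Go's crypto/x509, with an assumed certificate path:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// matching the semantics of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Assumed path, taken from the log above.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```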
	I0813 17:29:51.444384    4376 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50478 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0813 17:29:51.444445    4376 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0813 17:29:51.455751    4376 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 17:29:51.458764    4376 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0813 17:29:51.458770    4376 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0813 17:29:51.458790    4376 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0813 17:29:51.462515    4376 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 17:29:51.462825    4376 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-967000" does not appear in /Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:29:51.462926    4376 kubeconfig.go:62] /Users/jenkins/minikube-integration/19429-1127/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-967000" cluster setting kubeconfig missing "stopped-upgrade-967000" context setting]
	I0813 17:29:51.463133    4376 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/kubeconfig: {Name:mk4f6a628d9f9f6550ed229faba2a879ed685a75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:29:51.463615    4376 kapi.go:59] client config for stopped-upgrade-967000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/client.key", CAFile:"/Users/jenkins/minikube-integration/19429-1127/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105da7e30), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 17:29:51.463975    4376 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 17:29:51.466668    4376 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-967000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
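
Drift detection here is just `sudo diff -u` between the previously applied kubeadm.yaml and the freshly rendered kubeadm.yaml.new: exit status 0 means no change, 1 means the files differ (reconfigure from the new file), and anything else is an error. A sketch of that decision under those standard diff exit-code assumptions:

```go
package main

import (
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u old new`: exit status 1 means the files
// differ, which the restart path above treats as config drift.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // identical: nothing to do
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // differ: reconfigure cluster
	}
	return false, "", err // status 2 or exec failure (missing file, etc.)
}

func main() {
	drifted, diff, err := configDrifted(
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if drifted {
		fmt.Println("kubeadm config drift detected:\n" + diff)
	}
}
```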
	I0813 17:29:51.466672    4376 kubeadm.go:1160] stopping kube-system containers ...
	I0813 17:29:51.466709    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0813 17:29:51.477357    4376 docker.go:483] Stopping containers: [39b1c47004b9 f104bd895320 a3733ebf7dbd 19258fc6df7f 7a9b4be4a825 288d1ff2b9f9 9f18fcade693 84ea75f51f17]
	I0813 17:29:51.477429    4376 ssh_runner.go:195] Run: docker stop 39b1c47004b9 f104bd895320 a3733ebf7dbd 19258fc6df7f 7a9b4be4a825 288d1ff2b9f9 9f18fcade693 84ea75f51f17
	I0813 17:29:51.488295    4376 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0813 17:29:51.493603    4376 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 17:29:51.496606    4376 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 17:29:51.496616    4376 kubeadm.go:157] found existing configuration files:
	
	I0813 17:29:51.496641    4376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/admin.conf
	I0813 17:29:51.499123    4376 kubeadm.go:163] "https://control-plane.minikube.internal:50478" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0813 17:29:51.499142    4376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0813 17:29:51.501842    4376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/kubelet.conf
	I0813 17:29:51.504782    4376 kubeadm.go:163] "https://control-plane.minikube.internal:50478" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0813 17:29:51.504802    4376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0813 17:29:51.507394    4376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/controller-manager.conf
	I0813 17:29:51.509958    4376 kubeadm.go:163] "https://control-plane.minikube.internal:50478" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0813 17:29:51.509978    4376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0813 17:29:51.512817    4376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/scheduler.conf
	I0813 17:29:51.515219    4376 kubeadm.go:163] "https://control-plane.minikube.internal:50478" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0813 17:29:51.515241    4376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0813 17:29:51.517949    4376 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 17:29:51.520845    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 17:29:51.545038    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 17:29:52.187007    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 17:29:52.318199    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 17:29:52.341932    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
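
Rather than a full `kubeadm init`, the restart path replays individual phases in order — certs, kubeconfig, kubelet-start, control-plane, and etcd — each against the same /var/tmp/minikube/kubeadm.yaml. A sketch driving those phases with the pinned v1.24.1 binaries directory prepended to PATH, as the bash invocations above do (sudo is elided in this sketch):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	binDir := "/var/lib/minikube/binaries/v1.24.1"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	// The phase sequence from the log, in the same order.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", cfg)
		cmd := exec.Command(binDir+"/kubeadm", args...)
		// Prepend the pinned binaries dir, as `env PATH=...:$PATH` does.
		cmd.Env = append(os.Environ(), "PATH="+binDir+":"+os.Getenv("PATH"))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v: %v\n", p, err)
			os.Exit(1)
		}
	}
}
```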
	I0813 17:29:52.377033    4376 api_server.go:52] waiting for apiserver process to appear ...
	I0813 17:29:52.377106    4376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 17:29:52.879174    4376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 17:29:55.971328    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:29:55.971438    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:29:55.982922    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:29:55.982992    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:29:55.993894    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:29:55.993960    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:29:56.004670    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:29:56.004740    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:29:56.015346    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:29:56.015416    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:29:56.026150    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:29:56.026219    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:29:56.036729    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:29:56.036798    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:29:56.050706    4162 logs.go:276] 0 containers: []
	W0813 17:29:56.050721    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:29:56.050774    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:29:56.061363    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:29:56.061388    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:29:56.061394    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:29:56.066364    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:29:56.066369    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:29:56.083660    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:29:56.083672    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:29:56.101779    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:29:56.101791    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:29:56.121398    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:29:56.121407    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:29:56.135103    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:29:56.135115    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:29:56.146734    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:29:56.146746    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:29:56.164700    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:29:56.164710    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:29:53.379144    4376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 17:29:53.383303    4376 api_server.go:72] duration metric: took 1.006286667s to wait for apiserver process to appear ...
	I0813 17:29:53.383313    4376 api_server.go:88] waiting for apiserver healthz status ...
	I0813 17:29:53.383321    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:29:56.202623    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:29:56.202634    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:29:56.240111    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:29:56.240124    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:29:56.254666    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:29:56.254676    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:29:56.266069    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:29:56.266080    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:29:56.278225    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:29:56.278239    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:29:56.289943    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:29:56.289952    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:29:56.304448    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:29:56.304457    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:29:56.317835    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:29:56.317846    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:29:56.342121    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:29:56.342131    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
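
Each diagnostic pass above follows the same shape: enumerate containers per component with `docker ps -a --filter=name=k8s_<component>`, then tail 400 lines of logs from each match. A compact sketch of that gather loop; the component list below is a subset of the one in the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containersFor lists container IDs whose names match k8s_<component>,
// mirroring `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`.
func containersFor(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containersFor(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines, as in `docker logs --tail 400 <id>`.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
		}
	}
}
```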
	I0813 17:29:58.855988    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:29:58.385373    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:29:58.385417    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
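
Both PIDs here (4162 restoring its cluster, 4376 inside restartPrimaryControlPlane) are stuck in the same loop: GET https://10.0.2.15:8443/healthz with a client-side timeout, log `stopped:` on each deadline, retry. A minimal polling loop in that spirit; the insecure TLS config exists only because this sketch has no cluster CA to pin, unlike minikube:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-attempt deadline, as in the log
		Transport: &http.Transport{
			// Sketch only: skip verification; minikube pins the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for attempt := 0; attempt < 10; attempt++ {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. Client.Timeout exceeded
			time.Sleep(500 * time.Millisecond)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
	}
	fmt.Println("apiserver never became healthy")
}
```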
	I0813 17:30:03.856264    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:03.856520    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:30:03.870111    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:30:03.870185    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:30:03.880827    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:30:03.880899    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:30:03.891539    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:30:03.891609    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:30:03.902425    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:30:03.902499    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:30:03.915182    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:30:03.915249    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:30:03.932116    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:30:03.932189    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:30:03.942515    4162 logs.go:276] 0 containers: []
	W0813 17:30:03.942527    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:30:03.942586    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:30:03.953389    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:30:03.953409    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:30:03.953414    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:30:03.977147    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:30:03.977158    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:30:03.996322    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:30:03.996335    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:30:04.007678    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:30:04.007688    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:30:04.020251    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:30:04.020263    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:30:04.058100    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:30:04.058111    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:30:04.062901    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:30:04.062908    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:30:04.098976    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:30:04.098990    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:30:04.113251    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:30:04.113261    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:30:04.137682    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:30:04.137691    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:30:04.149950    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:30:04.149961    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:30:04.164435    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:30:04.164446    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:30:04.175843    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:30:04.175854    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:30:04.189838    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:30:04.189848    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:30:04.201399    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:30:04.201411    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:30:04.218913    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:30:04.218922    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:30:04.230019    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:30:04.230031    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:30:03.385975    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:03.386022    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:06.744906    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:08.386437    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:08.386476    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:11.747085    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:11.747197    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:30:11.759128    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:30:11.759201    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:30:11.770734    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:30:11.770801    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:30:11.782443    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:30:11.782514    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:30:11.798836    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:30:11.798907    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:30:11.809633    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:30:11.809699    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:30:11.822485    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:30:11.822554    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:30:11.834129    4162 logs.go:276] 0 containers: []
	W0813 17:30:11.834140    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:30:11.834190    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:30:11.846042    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:30:11.846064    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:30:11.846073    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:30:11.892968    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:30:11.892983    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:30:11.906636    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:30:11.906650    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:30:11.930451    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:30:11.930469    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:30:11.943623    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:30:11.943637    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:30:11.983061    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:30:11.983078    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:30:11.999840    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:30:11.999854    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:30:12.016981    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:30:12.016993    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:30:12.029605    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:30:12.029620    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:30:12.041431    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:30:12.041442    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:30:12.060715    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:30:12.060726    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:30:12.078694    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:30:12.078704    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:30:12.090733    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:30:12.090745    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:30:12.095347    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:30:12.095353    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:30:12.108959    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:30:12.108970    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:30:12.126455    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:30:12.126465    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:30:12.138098    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:30:12.138110    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:30:14.654233    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:13.387124    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:13.387190    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:19.656390    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:19.656643    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:30:19.679235    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:30:19.679334    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:30:19.695134    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:30:19.695215    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:30:19.707810    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:30:19.707867    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:30:19.719162    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:30:19.719237    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:30:19.729455    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:30:19.729535    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:30:19.740269    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:30:19.740340    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:30:19.754383    4162 logs.go:276] 0 containers: []
	W0813 17:30:19.754397    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:30:19.754467    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:30:19.776553    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:30:19.776570    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:30:19.776576    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:30:19.814778    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:30:19.814790    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:30:19.834831    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:30:19.834841    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:30:19.849179    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:30:19.849191    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:30:19.862668    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:30:19.862679    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:30:19.902335    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:30:19.902345    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:30:19.919291    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:30:19.919302    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:30:19.936504    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:30:19.936515    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:30:19.942758    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:30:19.942768    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:30:19.979117    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:30:19.979139    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:30:19.992282    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:30:19.992294    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:30:20.010425    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:30:20.010438    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:30:20.023306    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:30:20.023319    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:30:20.037438    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:30:20.037451    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:30:20.048913    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:30:20.048924    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:30:20.060389    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:30:20.060398    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:30:20.071932    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:30:20.071943    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:30:18.388064    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:18.388143    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:22.597056    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:23.389315    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:23.389362    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:27.599389    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:27.599672    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:30:27.629970    4162 logs.go:276] 2 containers: [7246d32eed31 4f718d28b77f]
	I0813 17:30:27.630088    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:30:27.646735    4162 logs.go:276] 2 containers: [878df157955d 6a4674d869c5]
	I0813 17:30:27.646814    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:30:27.659837    4162 logs.go:276] 1 containers: [78a0199306c6]
	I0813 17:30:27.659908    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:30:27.671630    4162 logs.go:276] 2 containers: [87e3dbb4478d 711b1c09ff24]
	I0813 17:30:27.671702    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:30:27.682276    4162 logs.go:276] 1 containers: [2bac08e5c49f]
	I0813 17:30:27.682342    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:30:27.696401    4162 logs.go:276] 2 containers: [71d667143dad d06e29ee9496]
	I0813 17:30:27.696469    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:30:27.706966    4162 logs.go:276] 0 containers: []
	W0813 17:30:27.706977    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:30:27.707031    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:30:27.718120    4162 logs.go:276] 2 containers: [464dae67693c 3b8f44d68139]
	I0813 17:30:27.718139    4162 logs.go:123] Gathering logs for etcd [6a4674d869c5] ...
	I0813 17:30:27.718144    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a4674d869c5"
	I0813 17:30:27.737331    4162 logs.go:123] Gathering logs for coredns [78a0199306c6] ...
	I0813 17:30:27.737341    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78a0199306c6"
	I0813 17:30:27.748506    4162 logs.go:123] Gathering logs for kube-scheduler [87e3dbb4478d] ...
	I0813 17:30:27.748517    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87e3dbb4478d"
	I0813 17:30:27.759600    4162 logs.go:123] Gathering logs for kube-controller-manager [d06e29ee9496] ...
	I0813 17:30:27.759610    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d06e29ee9496"
	I0813 17:30:27.771322    4162 logs.go:123] Gathering logs for storage-provisioner [3b8f44d68139] ...
	I0813 17:30:27.771334    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b8f44d68139"
	I0813 17:30:27.782685    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:30:27.782694    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:30:27.823391    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:30:27.823412    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:30:27.827723    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:30:27.827731    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:30:27.861938    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:30:27.861947    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:30:27.884960    4162 logs.go:123] Gathering logs for storage-provisioner [464dae67693c] ...
	I0813 17:30:27.884971    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464dae67693c"
	I0813 17:30:27.896856    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:30:27.896873    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:30:27.908868    4162 logs.go:123] Gathering logs for kube-apiserver [4f718d28b77f] ...
	I0813 17:30:27.908878    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f718d28b77f"
	I0813 17:30:27.929553    4162 logs.go:123] Gathering logs for etcd [878df157955d] ...
	I0813 17:30:27.929564    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878df157955d"
	I0813 17:30:27.943258    4162 logs.go:123] Gathering logs for kube-proxy [2bac08e5c49f] ...
	I0813 17:30:27.943268    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bac08e5c49f"
	I0813 17:30:27.954679    4162 logs.go:123] Gathering logs for kube-apiserver [7246d32eed31] ...
	I0813 17:30:27.954689    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7246d32eed31"
	I0813 17:30:27.969230    4162 logs.go:123] Gathering logs for kube-scheduler [711b1c09ff24] ...
	I0813 17:30:27.969240    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 711b1c09ff24"
	I0813 17:30:27.986521    4162 logs.go:123] Gathering logs for kube-controller-manager [71d667143dad] ...
	I0813 17:30:27.986533    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71d667143dad"
	I0813 17:30:30.509800    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:28.390697    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:28.390750    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:35.512223    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:35.512367    4162 kubeadm.go:597] duration metric: took 4m4.545122792s to restartPrimaryControlPlane
	W0813 17:30:35.512494    4162 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0813 17:30:35.512557    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0813 17:30:36.549383    4162 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.036826792s)
	I0813 17:30:36.549448    4162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0813 17:30:36.554700    4162 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 17:30:36.558267    4162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 17:30:36.561729    4162 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 17:30:36.561739    4162 kubeadm.go:157] found existing configuration files:
	
	I0813 17:30:36.561777    4162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/admin.conf
	I0813 17:30:36.564739    4162 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0813 17:30:36.564781    4162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0813 17:30:36.567504    4162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/kubelet.conf
	I0813 17:30:36.570447    4162 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0813 17:30:36.570486    4162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0813 17:30:36.573628    4162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/controller-manager.conf
	I0813 17:30:36.576885    4162 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0813 17:30:36.576915    4162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0813 17:30:36.579801    4162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/scheduler.conf
	I0813 17:30:36.582477    4162 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0813 17:30:36.582516    4162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0813 17:30:36.586013    4162 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0813 17:30:36.603001    4162 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0813 17:30:36.603031    4162 kubeadm.go:310] [preflight] Running pre-flight checks
	I0813 17:30:36.652004    4162 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0813 17:30:36.652060    4162 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0813 17:30:36.652115    4162 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0813 17:30:36.704672    4162 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0813 17:30:36.708665    4162 out.go:204]   - Generating certificates and keys ...
	I0813 17:30:36.708698    4162 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0813 17:30:36.708725    4162 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0813 17:30:36.708756    4162 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0813 17:30:36.708782    4162 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0813 17:30:36.708810    4162 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0813 17:30:36.708832    4162 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0813 17:30:36.708872    4162 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0813 17:30:36.708897    4162 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0813 17:30:36.708927    4162 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0813 17:30:36.708958    4162 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0813 17:30:36.708974    4162 kubeadm.go:310] [certs] Using the existing "sa" key
	I0813 17:30:36.708996    4162 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0813 17:30:36.824090    4162 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0813 17:30:36.935114    4162 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0813 17:30:37.112307    4162 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0813 17:30:37.205538    4162 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0813 17:30:37.233381    4162 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0813 17:30:37.233906    4162 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0813 17:30:37.233958    4162 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0813 17:30:37.319903    4162 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0813 17:30:33.392735    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:33.392803    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:37.323760    4162 out.go:204]   - Booting up control plane ...
	I0813 17:30:37.323804    4162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0813 17:30:37.323838    4162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0813 17:30:37.323867    4162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0813 17:30:37.323906    4162 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0813 17:30:37.323980    4162 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0813 17:30:38.395100    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:38.395121    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:42.324377    4162 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.002598 seconds
	I0813 17:30:42.324525    4162 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0813 17:30:42.334780    4162 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0813 17:30:42.845063    4162 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0813 17:30:42.845181    4162 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-126000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0813 17:30:43.350168    4162 kubeadm.go:310] [bootstrap-token] Using token: zkcav3.7ynvfpmi1ev3k3bj
	I0813 17:30:43.356588    4162 out.go:204]   - Configuring RBAC rules ...
	I0813 17:30:43.356664    4162 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0813 17:30:43.356734    4162 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0813 17:30:43.359212    4162 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0813 17:30:43.364185    4162 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0813 17:30:43.365496    4162 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0813 17:30:43.367199    4162 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0813 17:30:43.370686    4162 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0813 17:30:43.514517    4162 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0813 17:30:43.754998    4162 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0813 17:30:43.755563    4162 kubeadm.go:310] 
	I0813 17:30:43.755598    4162 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0813 17:30:43.755608    4162 kubeadm.go:310] 
	I0813 17:30:43.755666    4162 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0813 17:30:43.755672    4162 kubeadm.go:310] 
	I0813 17:30:43.755688    4162 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0813 17:30:43.755727    4162 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0813 17:30:43.755758    4162 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0813 17:30:43.755762    4162 kubeadm.go:310] 
	I0813 17:30:43.755795    4162 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0813 17:30:43.755798    4162 kubeadm.go:310] 
	I0813 17:30:43.755839    4162 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0813 17:30:43.755860    4162 kubeadm.go:310] 
	I0813 17:30:43.755889    4162 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0813 17:30:43.755936    4162 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0813 17:30:43.756023    4162 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0813 17:30:43.756031    4162 kubeadm.go:310] 
	I0813 17:30:43.756080    4162 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0813 17:30:43.756129    4162 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0813 17:30:43.756135    4162 kubeadm.go:310] 
	I0813 17:30:43.756196    4162 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zkcav3.7ynvfpmi1ev3k3bj \
	I0813 17:30:43.756258    4162 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:94a653d9144e0f51dbf8cb0881c67d995fb93f16972a5a4e4bd9f3c8d4a5aa34 \
	I0813 17:30:43.756273    4162 kubeadm.go:310] 	--control-plane 
	I0813 17:30:43.756282    4162 kubeadm.go:310] 
	I0813 17:30:43.756368    4162 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0813 17:30:43.756375    4162 kubeadm.go:310] 
	I0813 17:30:43.756431    4162 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zkcav3.7ynvfpmi1ev3k3bj \
	I0813 17:30:43.756504    4162 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:94a653d9144e0f51dbf8cb0881c67d995fb93f16972a5a4e4bd9f3c8d4a5aa34 
	I0813 17:30:43.756569    4162 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0813 17:30:43.756578    4162 cni.go:84] Creating CNI manager for ""
	I0813 17:30:43.756587    4162 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0813 17:30:43.760671    4162 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 17:30:43.768644    4162 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0813 17:30:43.771919    4162 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0813 17:30:43.776811    4162 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 17:30:43.776874    4162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 17:30:43.776874    4162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-126000 minikube.k8s.io/updated_at=2024_08_13T17_30_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf minikube.k8s.io/name=running-upgrade-126000 minikube.k8s.io/primary=true
	I0813 17:30:43.829232    4162 ops.go:34] apiserver oom_adj: -16
	I0813 17:30:43.829232    4162 kubeadm.go:1113] duration metric: took 52.408584ms to wait for elevateKubeSystemPrivileges
	I0813 17:30:43.829246    4162 kubeadm.go:394] duration metric: took 4m12.875502459s to StartCluster
	I0813 17:30:43.829257    4162 settings.go:142] acquiring lock: {Name:mkaf11e998595d0fbc8bedb0051c4325b4dc127d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:30:43.829342    4162 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:30:43.829720    4162 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/kubeconfig: {Name:mk4f6a628d9f9f6550ed229faba2a879ed685a75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:30:43.830188    4162 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:30:43.830276    4162 config.go:182] Loaded profile config "running-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0813 17:30:43.830253    4162 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0813 17:30:43.830292    4162 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-126000"
	I0813 17:30:43.830300    4162 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-126000"
	I0813 17:30:43.830305    4162 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-126000"
	W0813 17:30:43.830308    4162 addons.go:243] addon storage-provisioner should already be in state true
	I0813 17:30:43.830315    4162 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-126000"
	I0813 17:30:43.830319    4162 host.go:66] Checking if "running-upgrade-126000" exists ...
	I0813 17:30:43.831254    4162 kapi.go:59] client config for running-upgrade-126000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/running-upgrade-126000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/running-upgrade-126000/client.key", CAFile:"/Users/jenkins/minikube-integration/19429-1127/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1045cbe30), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 17:30:43.831375    4162 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-126000"
	W0813 17:30:43.831380    4162 addons.go:243] addon default-storageclass should already be in state true
	I0813 17:30:43.831387    4162 host.go:66] Checking if "running-upgrade-126000" exists ...
	I0813 17:30:43.834456    4162 out.go:177] * Verifying Kubernetes components...
	I0813 17:30:43.834767    4162 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 17:30:43.838721    4162 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 17:30:43.838729    4162 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/running-upgrade-126000/id_rsa Username:docker}
	I0813 17:30:43.842381    4162 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 17:30:43.846575    4162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 17:30:43.850619    4162 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 17:30:43.850624    4162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 17:30:43.850631    4162 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/running-upgrade-126000/id_rsa Username:docker}
	I0813 17:30:43.946193    4162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0813 17:30:43.951181    4162 api_server.go:52] waiting for apiserver process to appear ...
	I0813 17:30:43.951237    4162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 17:30:43.955286    4162 api_server.go:72] duration metric: took 125.087334ms to wait for apiserver process to appear ...
	I0813 17:30:43.955294    4162 api_server.go:88] waiting for apiserver healthz status ...
	I0813 17:30:43.955301    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:43.992680    4162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 17:30:44.020183    4162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 17:30:44.329073    4162 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0813 17:30:44.329086    4162 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0813 17:30:43.396631    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:43.396650    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:48.957367    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:48.957404    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:48.397772    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:48.397844    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:53.957671    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:53.957705    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:53.400249    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:53.400359    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:30:53.412061    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:30:53.412139    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:30:53.422764    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:30:53.422842    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:30:53.434078    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:30:53.434160    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:30:53.444432    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:30:53.444514    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:30:53.455336    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:30:53.455415    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:30:53.465791    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:30:53.465865    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:30:53.476224    4376 logs.go:276] 0 containers: []
	W0813 17:30:53.476236    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:30:53.476302    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:30:53.487103    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:30:53.487122    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:30:53.487129    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:30:53.505243    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:30:53.505254    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:30:53.532410    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:30:53.532423    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:30:53.547220    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:30:53.547231    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:30:53.559619    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:30:53.559631    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:30:53.571088    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:30:53.571100    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:30:53.616224    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:30:53.616242    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:30:53.628212    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:30:53.628225    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:30:53.640223    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:30:53.640234    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:30:53.660282    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:30:53.660294    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:30:53.673126    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:30:53.673137    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:30:53.686018    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:30:53.686031    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:30:53.701676    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:30:53.701688    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:30:53.713389    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:30:53.713398    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:30:53.752901    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:30:53.752912    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:30:53.757383    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:30:53.757390    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:30:53.866007    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:30:53.866020    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:30:56.382602    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:58.957950    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:58.957997    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:01.384443    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:01.384658    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:31:01.407776    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:31:01.407906    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:31:01.422844    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:31:01.422948    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:31:01.435679    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:31:01.435770    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:31:01.448097    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:31:01.448171    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:31:01.458989    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:31:01.459065    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:31:01.469509    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:31:01.469575    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:31:01.479543    4376 logs.go:276] 0 containers: []
	W0813 17:31:01.479558    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:31:01.479630    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:31:01.490394    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:31:01.490414    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:31:01.490420    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:31:01.527543    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:31:01.527554    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:31:01.568023    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:31:01.568034    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:31:01.587351    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:31:01.587361    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:31:01.598505    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:31:01.598519    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:31:01.610914    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:31:01.610925    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:31:01.614993    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:31:01.615000    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:31:01.627192    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:31:01.627206    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:31:01.639167    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:31:01.639187    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:31:01.656683    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:31:01.656694    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:31:01.669807    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:31:01.669819    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:31:01.682804    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:31:01.682817    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:31:01.697226    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:31:01.697235    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:31:01.711390    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:31:01.711401    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:31:01.722941    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:31:01.722951    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:31:01.735711    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:31:01.735722    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:31:01.761125    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:31:01.761135    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:31:03.958388    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:03.958426    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:04.302562    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:08.958978    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:08.959001    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:09.304848    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:09.305030    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:31:09.325625    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:31:09.325739    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:31:09.339578    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:31:09.339660    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:31:09.351043    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:31:09.351107    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:31:09.361840    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:31:09.361937    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:31:09.372228    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:31:09.372304    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:31:09.382530    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:31:09.382598    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:31:09.392050    4376 logs.go:276] 0 containers: []
	W0813 17:31:09.392061    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:31:09.392127    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:31:09.402563    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:31:09.402583    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:31:09.402590    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:31:09.414077    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:31:09.414089    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:31:09.425458    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:31:09.425469    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:31:09.465812    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:31:09.465821    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:31:09.482739    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:31:09.482749    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:31:09.496112    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:31:09.496123    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:31:09.514994    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:31:09.515004    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:31:09.528981    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:31:09.528993    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:31:09.541068    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:31:09.541079    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:31:09.552771    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:31:09.552781    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:31:09.565173    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:31:09.565184    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:31:09.604266    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:31:09.604278    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:31:09.615964    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:31:09.615977    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:31:09.641170    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:31:09.641183    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:31:09.645911    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:31:09.645918    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:31:09.681407    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:31:09.681419    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:31:09.692930    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:31:09.692942    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:31:12.207632    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:13.959862    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:13.959884    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0813 17:31:14.331121    4162 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0813 17:31:14.337593    4162 out.go:177] * Enabled addons: storage-provisioner
	I0813 17:31:14.349404    4162 addons.go:510] duration metric: took 30.519589375s for enable addons: enabled=[storage-provisioner]
	I0813 17:31:17.209863    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:17.210016    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:31:17.223584    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:31:17.223674    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:31:17.235265    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:31:17.235342    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:31:17.246195    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:31:17.246278    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:31:17.256891    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:31:17.256965    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:31:17.270417    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:31:17.270488    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:31:17.280712    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:31:17.280796    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:31:17.290557    4376 logs.go:276] 0 containers: []
	W0813 17:31:17.290570    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:31:17.290647    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:31:17.301377    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:31:17.301396    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:31:17.301402    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:31:17.339950    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:31:17.339959    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:31:17.374521    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:31:17.374532    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:31:17.389308    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:31:17.389318    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:31:17.401501    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:31:17.401512    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:31:17.413910    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:31:17.413920    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:31:17.425510    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:31:17.425521    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:31:17.437279    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:31:17.437289    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:31:17.449715    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:31:17.449724    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:31:17.467286    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:31:17.467296    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:31:17.481047    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:31:17.481058    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:31:17.498277    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:31:17.498287    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:31:17.509310    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:31:17.509321    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:31:17.521375    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:31:17.521385    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:31:17.525922    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:31:17.525929    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:31:17.566220    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:31:17.566235    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:31:17.578127    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:31:17.578139    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:31:18.960740    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:18.960760    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:20.106635    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:23.961894    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:23.961933    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:25.107288    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:25.107483    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:31:25.127538    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:31:25.127637    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:31:25.142319    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:31:25.142392    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:31:25.156066    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:31:25.156143    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:31:25.169757    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:31:25.169828    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:31:25.180829    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:31:25.180908    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:31:25.191198    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:31:25.191265    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:31:25.201548    4376 logs.go:276] 0 containers: []
	W0813 17:31:25.201559    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:31:25.201627    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:31:25.212304    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:31:25.212323    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:31:25.212328    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:31:25.226170    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:31:25.226180    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:31:25.237561    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:31:25.237573    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:31:25.255151    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:31:25.255162    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:31:25.267460    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:31:25.267471    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:31:25.292784    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:31:25.292793    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:31:25.305054    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:31:25.305065    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:31:25.318938    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:31:25.318949    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:31:25.330912    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:31:25.330922    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:31:25.346095    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:31:25.346105    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:31:25.384120    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:31:25.384130    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:31:25.398403    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:31:25.398414    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:31:25.435156    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:31:25.435166    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:31:25.449925    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:31:25.449937    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:31:25.462763    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:31:25.462773    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:31:25.499991    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:31:25.499999    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:31:25.503863    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:31:25.503870    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:31:28.963524    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:28.963550    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:28.019638    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:33.963901    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:33.963922    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:33.021829    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:33.021930    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:31:33.036737    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:31:33.036819    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:31:33.047632    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:31:33.047712    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:31:33.058312    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:31:33.058377    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:31:33.068939    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:31:33.069015    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:31:33.079369    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:31:33.079450    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:31:33.090247    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:31:33.090323    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:31:33.105180    4376 logs.go:276] 0 containers: []
	W0813 17:31:33.105192    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:31:33.105258    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:31:33.115902    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:31:33.115923    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:31:33.115929    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:31:33.153430    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:31:33.153441    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:31:33.164657    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:31:33.164669    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:31:33.176079    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:31:33.176091    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:31:33.201373    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:31:33.201382    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:31:33.206054    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:31:33.206061    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:31:33.220140    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:31:33.220151    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:31:33.236435    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:31:33.236446    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:31:33.275283    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:31:33.275295    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:31:33.289779    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:31:33.289789    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:31:33.301143    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:31:33.301156    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:31:33.313923    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:31:33.313934    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:31:33.325592    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:31:33.325601    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:31:33.339397    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:31:33.339409    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:31:33.377546    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:31:33.377557    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:31:33.389092    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:31:33.389104    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:31:33.401177    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:31:33.401188    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:31:35.920451    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:38.965892    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:38.965941    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:40.922723    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:40.922919    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:31:40.942850    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:31:40.942947    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:31:40.956495    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:31:40.956579    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:31:40.967651    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:31:40.967727    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:31:40.978838    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:31:40.978918    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:31:40.993724    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:31:40.993801    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:31:41.006774    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:31:41.006851    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:31:41.017164    4376 logs.go:276] 0 containers: []
	W0813 17:31:41.017174    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:31:41.017230    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:31:41.027209    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:31:41.027228    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:31:41.027234    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:31:41.039277    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:31:41.039291    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:31:41.043831    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:31:41.043838    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:31:41.058436    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:31:41.058447    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:31:41.071167    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:31:41.071178    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:31:41.088157    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:31:41.088167    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:31:41.099623    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:31:41.099635    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:31:41.120713    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:31:41.120725    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:31:41.141842    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:31:41.141852    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:31:41.165158    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:31:41.165166    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:31:41.176776    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:31:41.176788    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:31:41.190710    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:31:41.190722    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:31:41.202691    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:31:41.202703    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:31:41.214255    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:31:41.214268    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:31:41.252370    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:31:41.252379    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:31:41.292857    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:31:41.292868    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:31:41.307156    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:31:41.307166    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:31:43.968111    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:43.968202    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:31:43.978749    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:31:43.978818    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:31:43.990253    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:31:43.990331    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:31:44.001287    4162 logs.go:276] 2 containers: [f436aa55d977 538bf00465c8]
	I0813 17:31:44.001374    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:31:44.011531    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:31:44.011613    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:31:44.021707    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:31:44.021775    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:31:44.032082    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:31:44.032167    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:31:44.042763    4162 logs.go:276] 0 containers: []
	W0813 17:31:44.042775    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:31:44.042844    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:31:44.053240    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:31:44.053255    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:31:44.053261    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:31:44.067823    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:31:44.067834    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:31:44.080166    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:31:44.080178    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:31:44.094656    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:31:44.094666    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:31:44.106869    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:31:44.106880    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:31:44.128617    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:31:44.128628    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:31:44.140150    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:31:44.140161    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:31:44.175002    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:31:44.175012    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:31:44.209002    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:31:44.209013    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:31:44.233582    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:31:44.233590    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:31:44.245526    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:31:44.245537    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:31:44.256942    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:31:44.256952    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:31:44.261590    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:31:44.261595    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:31:43.847777    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:46.777526    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:48.849946    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:48.850202    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:31:48.875035    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:31:48.875167    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:31:48.892008    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:31:48.892110    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:31:48.905112    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:31:48.905195    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:31:48.917231    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:31:48.917314    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:31:48.927487    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:31:48.927567    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:31:48.938904    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:31:48.938980    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:31:48.949724    4376 logs.go:276] 0 containers: []
	W0813 17:31:48.949737    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:31:48.949800    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:31:48.960832    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:31:48.960849    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:31:48.960855    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:31:48.972666    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:31:48.972679    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:31:48.987410    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:31:48.987421    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:31:49.030081    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:31:49.030096    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:31:49.048175    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:31:49.048188    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:31:49.059956    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:31:49.059968    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:31:49.071394    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:31:49.071404    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:31:49.095069    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:31:49.095080    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:31:49.099555    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:31:49.099563    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:31:49.135330    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:31:49.135341    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:31:49.150665    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:31:49.150674    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:31:49.163081    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:31:49.163096    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:31:49.174790    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:31:49.174801    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:31:49.193291    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:31:49.193303    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:31:49.205410    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:31:49.205420    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:31:49.218009    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:31:49.218019    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:31:49.237067    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:31:49.237078    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:31:51.777832    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:51.778005    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:51.778144    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:31:51.795488    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:31:51.795599    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:31:51.808818    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:31:51.808902    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:31:51.821521    4162 logs.go:276] 2 containers: [f436aa55d977 538bf00465c8]
	I0813 17:31:51.821603    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:31:51.831519    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:31:51.831591    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:31:51.841955    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:31:51.842037    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:31:51.852700    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:31:51.852768    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:31:51.862686    4162 logs.go:276] 0 containers: []
	W0813 17:31:51.862699    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:31:51.862760    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:31:51.873404    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:31:51.873417    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:31:51.873422    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:31:51.898789    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:31:51.898798    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:31:51.912077    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:31:51.912089    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:31:51.948129    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:31:51.948138    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:31:51.962468    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:31:51.962478    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:31:51.976342    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:31:51.976354    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:31:51.988140    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:31:51.988151    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:31:52.005338    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:31:52.005350    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:31:52.016829    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:31:52.016840    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:31:52.022998    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:31:52.023005    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:31:52.061154    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:31:52.061165    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:31:52.072551    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:31:52.072562    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:31:52.087001    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:31:52.087011    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:31:54.601037    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:56.780019    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:56.780137    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:31:56.793295    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:31:56.793389    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:31:56.804291    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:31:56.804368    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:31:56.819052    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:31:56.819130    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:31:56.830134    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:31:56.830215    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:31:56.841533    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:31:56.841603    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:31:56.852843    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:31:56.852931    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:31:56.863791    4376 logs.go:276] 0 containers: []
	W0813 17:31:56.863803    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:31:56.863864    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:31:56.874352    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:31:56.874370    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:31:56.874376    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:31:56.878605    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:31:56.878611    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:31:56.916818    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:31:56.916831    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:31:56.935387    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:31:56.935398    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:31:56.948198    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:31:56.948210    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:31:56.960162    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:31:56.960174    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:31:57.000330    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:31:57.000343    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:31:57.014973    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:31:57.014985    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:31:57.029419    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:31:57.029432    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:31:57.053572    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:31:57.053587    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:31:57.091919    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:31:57.091931    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:31:57.106397    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:31:57.106408    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:31:57.117427    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:31:57.117439    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:31:57.129564    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:31:57.129574    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:31:57.141243    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:31:57.141254    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:31:57.153238    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:31:57.153249    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:31:57.165322    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:31:57.165333    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:31:59.603381    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:59.603724    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:31:59.638922    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:31:59.639061    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:31:59.658863    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:31:59.658973    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:31:59.675657    4162 logs.go:276] 2 containers: [f436aa55d977 538bf00465c8]
	I0813 17:31:59.675730    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:31:59.687257    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:31:59.687327    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:31:59.697540    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:31:59.697613    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:31:59.708208    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:31:59.708283    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:31:59.719486    4162 logs.go:276] 0 containers: []
	W0813 17:31:59.719500    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:31:59.719562    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:31:59.729725    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:31:59.729747    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:31:59.729753    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:31:59.734251    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:31:59.734259    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:31:59.748854    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:31:59.748868    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:31:59.760627    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:31:59.760638    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:31:59.776288    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:31:59.776300    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:31:59.792371    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:31:59.792383    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:31:59.827187    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:31:59.827195    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:31:59.863712    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:31:59.863723    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:31:59.877539    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:31:59.877549    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:31:59.888774    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:31:59.888785    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:31:59.903668    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:31:59.903679    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:31:59.915641    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:31:59.915652    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:31:59.933588    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:31:59.933597    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:31:59.679512    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:02.458080    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:04.681752    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:04.682078    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:04.719205    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:32:04.719357    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:04.736625    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:32:04.736725    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:04.749612    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:32:04.749702    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:04.761013    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:32:04.761095    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:04.771113    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:32:04.771189    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:04.781266    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:32:04.781350    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:04.791372    4376 logs.go:276] 0 containers: []
	W0813 17:32:04.791387    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:04.791454    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:04.802313    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:32:04.802333    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:32:04.802338    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:32:04.816784    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:32:04.816795    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:32:04.830677    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:04.830687    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:04.855542    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:32:04.855549    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:32:04.874507    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:32:04.874518    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:32:04.886699    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:32:04.886710    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:32:04.898913    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:04.898926    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:04.903115    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:32:04.903123    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:32:04.940511    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:32:04.940525    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:32:04.962943    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:32:04.962953    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:32:04.974187    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:32:04.974197    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:04.986507    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:04.986519    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:05.023617    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:05.023625    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:05.063044    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:32:05.063056    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:32:05.075236    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:32:05.075247    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:32:05.086956    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:32:05.086967    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:32:05.103724    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:32:05.103737    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:32:07.617430    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:07.458862    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:07.459019    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:07.473444    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:32:07.473535    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:07.484722    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:32:07.484794    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:07.494625    4162 logs.go:276] 2 containers: [f436aa55d977 538bf00465c8]
	I0813 17:32:07.494701    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:07.505062    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:32:07.505136    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:07.515536    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:32:07.515600    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:07.525540    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:32:07.525614    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:07.535474    4162 logs.go:276] 0 containers: []
	W0813 17:32:07.535489    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:07.535559    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:07.545417    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:32:07.545435    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:32:07.545442    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:07.557416    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:32:07.557426    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:32:07.571509    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:32:07.571518    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:32:07.582712    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:32:07.582722    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:32:07.594411    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:32:07.594422    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:32:07.612880    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:32:07.612891    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:32:07.624204    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:32:07.624217    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:32:07.639677    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:07.639688    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:07.663222    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:07.663233    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:07.697463    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:07.697474    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:07.702488    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:07.702494    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:07.736795    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:32:07.736806    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:32:07.752680    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:32:07.752691    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:32:10.265942    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:12.619554    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:12.619718    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:12.641171    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:32:12.641263    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:12.655208    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:32:12.655295    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:12.666113    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:32:12.666188    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:12.680792    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:32:12.680869    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:12.693298    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:32:12.693375    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:12.705006    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:32:12.705085    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:12.715438    4376 logs.go:276] 0 containers: []
	W0813 17:32:12.715452    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:12.715516    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:12.725994    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:32:12.726015    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:32:12.726021    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:32:12.742604    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:32:12.742615    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:32:12.754925    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:32:12.754935    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:32:12.767224    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:12.767235    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:12.792156    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:32:12.792167    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:32:12.828389    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:32:12.828399    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:32:12.839429    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:32:12.839440    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:32:12.850961    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:32:12.850972    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:12.863043    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:12.863053    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:12.901484    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:12.901495    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:12.937225    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:32:12.937236    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:32:12.951354    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:32:12.951364    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:32:12.963534    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:32:12.963546    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:32:12.984387    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:32:12.984400    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:32:12.998118    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:12.998127    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:15.268148    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:15.268409    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:15.290186    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:32:15.290307    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:15.307500    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:32:15.307600    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:15.320840    4162 logs.go:276] 2 containers: [f436aa55d977 538bf00465c8]
	I0813 17:32:15.320927    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:15.332175    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:32:15.332259    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:15.342865    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:32:15.342956    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:15.353298    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:32:15.353373    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:15.363606    4162 logs.go:276] 0 containers: []
	W0813 17:32:15.363618    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:15.363690    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:15.374078    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:32:15.374091    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:32:15.374099    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:32:15.387221    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:32:15.387233    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:32:15.406901    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:32:15.406912    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:32:15.418792    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:32:15.418807    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:32:15.434307    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:32:15.434320    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:15.446485    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:15.446496    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:15.481564    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:32:15.481575    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:32:15.496786    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:32:15.496799    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:32:15.510574    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:32:15.510584    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:32:15.522128    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:32:15.522138    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:32:15.540120    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:15.540129    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:15.563187    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:15.563195    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:15.567540    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:15.567548    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:13.002453    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:32:13.002460    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:32:13.016240    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:32:13.016251    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:32:15.532273    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:18.107287    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:20.534458    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:20.534624    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:20.554003    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:32:20.554117    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:20.568573    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:32:20.568660    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:20.582351    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:32:20.582427    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:20.593442    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:32:20.593522    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:20.603961    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:32:20.604033    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:20.614460    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:32:20.614541    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:20.625384    4376 logs.go:276] 0 containers: []
	W0813 17:32:20.625399    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:20.625460    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:20.635732    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:32:20.635751    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:32:20.635757    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:32:20.653187    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:20.653198    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:20.688535    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:32:20.688548    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:32:20.703856    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:32:20.703868    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:32:20.715613    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:32:20.715626    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:32:20.728149    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:32:20.728158    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:32:20.742973    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:32:20.742983    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:32:20.761655    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:32:20.761665    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:32:20.773005    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:20.773015    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:20.797446    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:20.797455    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:20.835994    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:32:20.836004    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:32:20.874444    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:32:20.874458    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:32:20.890794    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:32:20.890805    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:32:20.906118    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:20.906134    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:20.910977    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:32:20.910984    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:32:20.925302    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:32:20.925313    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:32:20.937291    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:32:20.937302    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:23.109498    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:23.109595    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:23.121633    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:32:23.121722    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:23.132197    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:32:23.132270    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:23.142378    4162 logs.go:276] 2 containers: [f436aa55d977 538bf00465c8]
	I0813 17:32:23.142457    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:23.155616    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:32:23.155688    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:23.166122    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:32:23.166197    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:23.176766    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:32:23.176832    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:23.186939    4162 logs.go:276] 0 containers: []
	W0813 17:32:23.186951    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:23.187023    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:23.199376    4162 logs.go:276] 1 containers: [701c525892f6]
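
Before every log pass the harness enumerates container IDs per control-plane component with docker ps name filters; the "N containers: [...]" lines above are the parsed result, and an empty list (as for "kindnet") is logged as a warning. A sketch of that enumeration under the same k8s_<component> filter convention; listComponentContainers is a made-up helper name:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listComponentContainers returns the IDs of all containers,
    // running or exited, whose name matches k8s_<component>.
    func listComponentContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "storage-provisioner"} {
            ids, err := listComponentContainers(c)
            if err != nil {
                fmt.Println(err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }
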
	I0813 17:32:23.199391    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:23.199396    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:23.235056    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:23.235068    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:23.271030    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:32:23.271042    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:32:23.283002    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:32:23.283015    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:32:23.297360    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:32:23.297371    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:32:23.308824    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:32:23.308835    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:32:23.326651    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:32:23.326662    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:23.339423    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:23.339434    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:23.344011    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:32:23.344018    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:32:23.359071    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:32:23.359082    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:32:23.374125    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:32:23.374137    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:32:23.385910    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:32:23.385921    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:32:23.405153    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:23.405164    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:25.930937    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:23.451059    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:30.930996    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:30.931403    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:30.962172    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:32:30.962310    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:30.980846    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:32:30.980955    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:30.995353    4162 logs.go:276] 2 containers: [f436aa55d977 538bf00465c8]
	I0813 17:32:30.995438    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:31.007049    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:32:31.007124    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:31.017135    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:32:31.017211    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:31.027300    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:32:31.027375    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:31.037675    4162 logs.go:276] 0 containers: []
	W0813 17:32:31.037688    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:31.037750    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:31.048187    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:32:31.048201    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:31.048205    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:31.083509    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:32:31.083517    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:32:31.098521    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:32:31.098530    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:32:31.110844    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:32:31.110856    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:32:31.125383    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:32:31.125392    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:32:31.137994    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:32:31.138005    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:32:31.149257    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:31.149267    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:31.173111    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:32:31.173120    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:31.184592    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:31.184602    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
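
The "container status" step a few lines up is a fallback chain: use crictl when `which crictl` resolves on PATH, otherwise fall back to plain docker ps -a. The same one-liner, copied verbatim from the log and wrapped locally; containerStatus is an illustrative wrapper, not minikube's API:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus runs the fallback one-liner the harness logs:
    // prefer crictl if installed, otherwise use plain docker ps -a.
    func containerStatus() (string, error) {
        out, err := exec.Command("/bin/bash", "-c",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("container status failed:", err)
        }
        fmt.Print(out)
    }
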
	I0813 17:32:28.452317    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:28.452567    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:28.477950    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:32:28.478046    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:28.490850    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:32:28.490932    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:28.502404    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:32:28.502482    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:28.512963    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:32:28.513044    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:28.526317    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:32:28.526391    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:28.537366    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:32:28.537440    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:28.547511    4376 logs.go:276] 0 containers: []
	W0813 17:32:28.547528    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:28.547596    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:28.558319    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:32:28.558337    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:32:28.558344    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:32:28.569135    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:28.569149    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:28.573516    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:32:28.573522    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:32:28.611154    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:32:28.611165    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:32:28.624006    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:32:28.624016    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:32:28.639171    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:32:28.639181    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:32:28.657112    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:28.657122    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:28.681735    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:28.681747    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:28.718215    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:32:28.718227    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:32:28.732943    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:32:28.732955    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:32:28.752989    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:32:28.753000    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:32:28.765160    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:32:28.765173    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:32:28.778846    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:32:28.778858    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:28.790796    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:28.790807    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:28.830105    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:32:28.830121    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:32:28.844959    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:32:28.844970    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:32:28.857000    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:32:28.857010    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:32:31.368582    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:31.189150    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:31.189156    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:31.224320    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:32:31.224332    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:32:31.239259    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:32:31.239269    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:32:31.253687    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:32:31.253696    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:32:33.776188    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:36.368444    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:36.368719    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:36.399095    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:32:36.399241    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:36.425080    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:32:36.425182    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:36.445332    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:32:36.445412    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:36.459100    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:32:36.459176    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:36.470299    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:32:36.470373    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:36.481185    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:32:36.481264    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:36.500146    4376 logs.go:276] 0 containers: []
	W0813 17:32:36.500159    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:36.500220    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:36.510497    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:32:36.510513    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:32:36.510518    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:32:36.524849    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:32:36.524860    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:32:36.535760    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:32:36.535772    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:32:36.548495    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:32:36.548504    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:36.562527    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:36.562539    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:36.601760    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:36.601770    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:36.636820    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:32:36.636830    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:32:36.651211    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:32:36.651223    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:32:36.665799    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:32:36.665814    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:32:36.684289    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:32:36.684301    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:32:36.695585    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:36.695595    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:36.705053    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:32:36.705060    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:32:36.749790    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:36.749801    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:36.773907    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:32:36.773916    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:32:36.788554    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:32:36.788566    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:32:36.802565    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:32:36.802577    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:32:36.814656    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:32:36.814667    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:32:38.776472    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
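
The probe/gather cycle keeps repeating because the harness is waiting out an overall startup deadline: every failed /healthz probe triggers another discovery-and-dump pass, then another probe, until the apiserver answers or the clock runs out. A condensed sketch of that control flow; waitForAPIServer, collectDiagnostics, and the 2 s pause between rounds are assumptions for illustration, not minikube's actual code:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForAPIServer polls check() until it succeeds or the overall
    // deadline passes; each failure is followed by a diagnostics pass,
    // which is why the same "Gathering logs for ..." block recurs above.
    func waitForAPIServer(check func() error, collectDiagnostics func(), overall time.Duration) error {
        deadline := time.Now().Add(overall)
        for time.Now().Before(deadline) {
            if err := check(); err == nil {
                return nil
            }
            collectDiagnostics()
            time.Sleep(2 * time.Second) // assumed pause between rounds
        }
        return errors.New("apiserver never reported healthy before the deadline")
    }

    func main() {
        err := waitForAPIServer(
            func() error { return errors.New("context deadline exceeded") },
            func() { fmt.Println("gathering logs ...") },
            10*time.Second,
        )
        fmt.Println(err)
    }
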
	I0813 17:32:38.776607    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:38.798639    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:32:38.798720    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:38.816543    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:32:38.816616    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:38.827969    4162 logs.go:276] 2 containers: [f436aa55d977 538bf00465c8]
	I0813 17:32:38.828041    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:38.838639    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:32:38.838707    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:38.849833    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:32:38.849891    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:38.861230    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:32:38.861309    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:38.872663    4162 logs.go:276] 0 containers: []
	W0813 17:32:38.872685    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:38.872803    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:38.883752    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:32:38.883766    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:32:38.883771    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:32:38.899886    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:32:38.899898    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:32:38.912942    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:32:38.912953    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:32:38.932490    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:38.932504    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:38.958315    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:38.958329    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:38.993881    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:32:38.993892    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:32:39.008117    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:32:39.008128    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:32:39.022381    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:32:39.022392    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:32:39.033533    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:32:39.033544    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:39.045102    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:39.045112    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:39.049568    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:39.049574    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:39.085284    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:32:39.085294    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:32:39.097045    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:32:39.097056    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:32:39.328000    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:41.614436    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:44.328672    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:44.328768    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:44.340328    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:32:44.340406    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:44.350742    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:32:44.350818    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:44.361555    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:32:44.361631    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:44.371782    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:32:44.371860    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:44.382428    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:32:44.382506    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:44.392988    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:32:44.393060    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:44.403196    4376 logs.go:276] 0 containers: []
	W0813 17:32:44.403207    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:44.403266    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:44.414033    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:32:44.414052    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:32:44.414067    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:32:44.426025    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:44.426036    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:44.449273    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:32:44.449284    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:32:44.465588    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:32:44.465597    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:32:44.478180    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:32:44.478190    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:32:44.496107    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:32:44.496117    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:32:44.514334    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:32:44.514346    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:32:44.527946    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:32:44.527957    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:32:44.540024    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:32:44.540036    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:32:44.551932    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:44.551943    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:44.591422    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:44.591430    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:44.595573    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:44.595580    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:44.636505    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:32:44.636515    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:32:44.674081    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:32:44.674092    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:32:44.685807    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:32:44.685818    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:32:44.698135    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:32:44.698146    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:32:44.709959    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:32:44.709968    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:47.223480    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:46.615440    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:46.615731    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:46.644829    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:32:46.644973    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:46.668028    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:32:46.668114    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:46.681445    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:32:46.681527    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:46.693433    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:32:46.693505    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:46.704241    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:32:46.704317    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:46.723944    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:32:46.724026    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:46.734135    4162 logs.go:276] 0 containers: []
	W0813 17:32:46.734148    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:46.734215    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:46.744193    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:32:46.744212    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:32:46.744218    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:32:46.763259    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:32:46.763270    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:32:46.778037    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:32:46.778048    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:46.789934    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:32:46.789945    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:32:46.804485    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:32:46.804495    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:32:46.816196    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:46.816207    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:46.849170    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:32:46.849179    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:32:46.860144    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:32:46.860154    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:32:46.871304    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:32:46.871318    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:32:46.883189    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:32:46.883199    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:32:46.895945    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:32:46.895956    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:32:46.913760    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:32:46.913772    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:32:46.925346    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:46.925355    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:46.930219    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:46.930226    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:46.965290    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:46.965302    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:49.492302    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:52.224996    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:52.225346    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:52.268461    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:32:52.268612    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:52.288701    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:32:52.288814    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:52.303342    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:32:52.303430    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:52.316177    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:32:52.316247    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:52.328392    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:32:52.328461    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:52.339243    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:32:52.339313    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:52.349928    4376 logs.go:276] 0 containers: []
	W0813 17:32:52.349940    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:52.350009    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:52.360824    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:32:52.360845    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:52.360853    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:52.401454    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:52.401469    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:52.438931    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:32:52.438944    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:32:52.457226    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:32:52.457237    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:52.469839    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:32:52.469850    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:32:52.482481    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:32:52.482491    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:32:52.494408    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:32:52.494420    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:32:52.507622    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:32:52.507632    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:32:52.519112    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:32:52.519123    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:32:52.534035    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:32:52.534045    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:32:52.549593    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:32:52.549606    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:32:52.567085    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:32:52.567099    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:32:52.595321    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:52.595333    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:52.617961    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:52.617972    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:52.622749    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:32:52.622756    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:32:52.661406    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:32:52.661417    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:32:52.679924    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:32:52.679939    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:32:54.493827    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:54.493947    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:54.506674    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:32:54.506755    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:54.518741    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:32:54.518833    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:54.530080    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:32:54.530155    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:54.540653    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:32:54.540739    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:54.552511    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:32:54.552585    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:54.564093    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:32:54.564169    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:54.574873    4162 logs.go:276] 0 containers: []
	W0813 17:32:54.574889    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:54.574954    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:54.585888    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:32:54.585908    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:54.585913    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:54.620135    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:54.620146    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:54.655004    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:32:54.655015    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:32:54.668289    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:32:54.668302    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:32:54.683420    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:32:54.683431    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:32:54.699070    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:32:54.699081    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:32:54.712776    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:32:54.712787    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:32:54.724775    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:54.724785    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:54.749880    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:32:54.749892    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:54.761354    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:54.761367    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:54.765980    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:32:54.765988    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:32:54.778003    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:32:54.778014    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:32:54.792244    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:32:54.792255    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:32:54.807061    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:32:54.807072    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:32:54.828817    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:32:54.828829    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:32:55.194091    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:57.342705    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:00.195947    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:00.196294    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:00.231942    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:33:00.232097    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:00.251425    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:33:00.251550    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:00.268802    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:33:00.268888    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:00.281595    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:33:00.281673    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:00.292239    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:33:00.292307    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:00.302891    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:33:00.302963    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:00.313130    4376 logs.go:276] 0 containers: []
	W0813 17:33:00.313141    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:00.313206    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:00.324285    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:33:00.324305    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:00.324311    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:00.362020    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:00.362028    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:00.398810    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:33:00.398821    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:33:00.437990    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:33:00.438005    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:33:00.453439    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:33:00.453449    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:33:00.469221    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:33:00.469231    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:33:00.481746    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:00.481757    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:00.486124    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:33:00.486130    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:33:00.505698    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:33:00.505709    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:33:00.525914    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:33:00.525924    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:00.537743    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:33:00.537755    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:33:00.552669    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:33:00.552680    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:33:00.564407    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:33:00.564419    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:33:00.580134    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:33:00.580145    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:33:00.591653    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:33:00.591665    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:33:00.603773    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:33:00.603785    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:33:00.615027    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:00.615038    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:02.344532    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:02.344796    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:02.374622    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:33:02.374748    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:02.392774    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:33:02.392888    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:02.407417    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:33:02.407504    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:02.419476    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:33:02.419547    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:02.430440    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:33:02.430517    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:02.441393    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:33:02.441474    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:02.453355    4162 logs.go:276] 0 containers: []
	W0813 17:33:02.453369    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:02.453434    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:02.464515    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:33:02.464536    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:33:02.464542    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:33:02.478731    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:33:02.478741    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:33:02.491142    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:02.491154    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:02.495602    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:33:02.495608    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:33:02.506889    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:33:02.506900    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:33:02.523756    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:02.523765    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:02.549216    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:33:02.549225    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:02.560817    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:02.560828    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:02.596486    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:33:02.596496    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:33:02.608347    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:33:02.608360    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:33:02.623527    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:33:02.623537    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:33:02.634918    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:02.634929    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:02.670129    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:33:02.670140    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:33:02.689669    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:33:02.689680    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:33:02.702147    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:33:02.702159    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:33:05.216093    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:03.138823    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:10.218364    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:10.218567    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:10.237173    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:33:10.237257    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:10.251823    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:33:10.251904    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:10.263603    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:33:10.263679    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:10.274400    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:33:10.274476    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:10.287172    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:33:10.287240    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:10.297581    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:33:10.297657    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:10.307862    4162 logs.go:276] 0 containers: []
	W0813 17:33:10.307876    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:10.307943    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:10.318531    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:33:10.318549    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:10.318554    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:10.352085    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:33:10.352094    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:33:10.365709    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:33:10.365721    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:33:10.377669    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:33:10.377678    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:33:10.392410    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:33:10.392420    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:33:10.403785    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:33:10.403793    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:10.417402    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:10.417412    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:10.454030    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:33:10.454041    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:33:10.468347    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:33:10.468358    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:33:10.481917    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:33:10.481929    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:33:10.502676    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:10.502692    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:10.509476    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:33:10.509486    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:33:10.520415    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:33:10.520426    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:33:10.532824    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:33:10.532835    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:33:10.544111    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:10.544122    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
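
Each retry cycle above begins by enumerating the control-plane containers: one docker ps -a call per component, filtered on the kubelet's k8s_<component> container-name prefix, with a Go-template format that prints only the IDs. A minimal, hypothetical reconstruction of that step follows (listComponentContainers is an invented name, and it shells out to a local docker CLI rather than going through minikube's SSH runner):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listComponentContainers mirrors the enumeration lines in the log:
// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func listComponentContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := listComponentContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// Matches the "N containers: [...]" lines emitted by logs.go above.
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}

The recurring "0 containers: []" result for kindnet, and the matching warning that no container was found, simply mean this cluster runs no CNI container with that name prefix; it is expected output, not an additional failure.
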
	I0813 17:33:08.140788    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:08.141009    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:08.163265    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:33:08.163374    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:08.176921    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:33:08.177014    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:08.190070    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:33:08.190149    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:08.200785    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:33:08.200862    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:08.211528    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:33:08.211603    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:08.221876    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:33:08.221950    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:08.233036    4376 logs.go:276] 0 containers: []
	W0813 17:33:08.233048    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:08.233107    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:08.243707    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:33:08.243727    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:33:08.243734    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:33:08.262458    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:33:08.262470    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:33:08.274685    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:33:08.274695    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:33:08.299553    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:33:08.299563    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:08.311615    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:08.311626    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:08.316042    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:33:08.316048    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:33:08.357350    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:33:08.357361    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:33:08.371914    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:33:08.371926    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:33:08.383406    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:33:08.383417    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:33:08.394412    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:08.394423    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:08.418273    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:33:08.418281    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:33:08.432456    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:33:08.432467    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:33:08.444567    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:33:08.444580    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:33:08.462385    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:33:08.462404    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:33:08.475711    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:33:08.475722    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:33:08.488159    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:08.488170    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:08.527960    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:08.527971    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:11.064633    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:13.071407    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:16.065063    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:16.065258    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:16.081328    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:33:16.081422    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:16.099422    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:33:16.099499    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:16.109914    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:33:16.109997    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:16.120543    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:33:16.120622    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:16.131644    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:33:16.131713    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:16.146390    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:33:16.146455    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:16.156116    4376 logs.go:276] 0 containers: []
	W0813 17:33:16.156131    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:16.156204    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:16.166637    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:33:16.166658    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:33:16.166663    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:33:16.181866    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:33:16.181876    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:33:16.196127    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:33:16.196139    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:16.208334    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:16.208346    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:16.249847    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:33:16.249858    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:33:16.263029    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:33:16.263040    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:33:16.275073    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:33:16.275087    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:33:16.286937    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:33:16.286947    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:33:16.304274    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:33:16.304284    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:33:16.316895    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:16.316905    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:16.339454    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:16.339467    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:16.343994    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:33:16.344000    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:33:16.355510    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:16.355519    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:16.393847    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:33:16.393858    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:33:16.433442    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:33:16.433452    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:33:16.451774    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:33:16.451786    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:33:16.463936    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:33:16.463949    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:33:18.071955    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:18.072125    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:18.086726    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:33:18.086815    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:18.098485    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:33:18.098554    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:18.109218    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:33:18.109307    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:18.119943    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:33:18.120018    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:18.130296    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:33:18.130367    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:18.140779    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:33:18.140853    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:18.150446    4162 logs.go:276] 0 containers: []
	W0813 17:33:18.150456    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:18.150516    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:18.160832    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:33:18.160849    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:33:18.160854    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:33:18.175260    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:33:18.175271    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:33:18.187210    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:33:18.187221    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:33:18.205266    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:33:18.205276    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:33:18.217592    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:33:18.217601    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:33:18.228857    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:18.228869    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:18.233187    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:18.233193    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:18.268298    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:33:18.268309    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:33:18.280239    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:33:18.280252    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:33:18.297176    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:18.297187    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:18.330265    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:33:18.330274    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:33:18.343841    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:33:18.343852    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:33:18.355695    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:33:18.355706    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:33:18.368671    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:18.368682    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:18.392153    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:33:18.392162    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:20.905759    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:18.977811    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:25.907874    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:25.908065    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:25.925156    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:33:25.925256    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:25.938118    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:33:25.938200    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:25.949990    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:33:25.950072    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:25.960227    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:33:25.960305    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:25.971229    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:33:25.971303    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:25.981488    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:33:25.981557    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:25.991480    4162 logs.go:276] 0 containers: []
	W0813 17:33:25.991496    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:25.991565    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:26.001994    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:33:26.002011    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:33:26.002016    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:33:26.014107    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:33:26.014118    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:33:26.025472    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:33:26.025482    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:26.036923    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:26.036933    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:26.041152    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:33:26.041158    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:33:26.057523    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:33:26.057534    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:33:26.072976    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:26.072986    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:26.096487    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:33:26.096496    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:33:26.110847    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:33:26.110858    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:33:26.123115    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:33:26.123125    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:33:26.134967    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:33:26.134977    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:33:26.149734    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:33:26.149744    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:33:26.168150    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:26.168161    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:23.979940    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:23.980119    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:23.998377    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:33:23.998485    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:24.012045    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:33:24.012131    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:24.023799    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:33:24.023881    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:24.033990    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:33:24.034067    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:24.044466    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:33:24.044569    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:24.055718    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:33:24.055801    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:24.070013    4376 logs.go:276] 0 containers: []
	W0813 17:33:24.070024    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:24.070091    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:24.080556    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:33:24.080575    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:24.080580    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:24.118287    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:24.118296    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:24.122167    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:33:24.122174    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:33:24.134086    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:33:24.134097    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:33:24.151543    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:24.151554    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:24.186961    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:33:24.186973    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:33:24.200858    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:33:24.200868    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:33:24.214936    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:33:24.214948    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:33:24.226708    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:33:24.226719    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:33:24.237737    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:33:24.237749    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:24.259097    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:33:24.259107    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:33:24.298168    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:33:24.298182    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:33:24.317780    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:33:24.317790    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:33:24.330776    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:33:24.330789    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:33:24.342848    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:33:24.342859    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:33:24.354543    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:33:24.354555    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:33:24.367148    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:24.367158    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:26.891745    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:26.203788    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:33:26.203798    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:33:26.223334    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:26.223344    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:28.760900    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:31.893885    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:31.894035    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:31.905941    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:33:31.906032    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:31.916596    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:33:31.916675    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:31.927301    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:33:31.927378    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:31.937782    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:33:31.937859    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:31.948780    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:33:31.948860    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:31.959694    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:33:31.959758    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:31.970452    4376 logs.go:276] 0 containers: []
	W0813 17:33:31.970464    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:31.970527    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:31.981676    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:33:31.981692    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:31.981699    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:31.986236    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:33:31.986242    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:33:32.022836    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:33:32.022846    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:33:32.035093    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:33:32.035102    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:33:32.053353    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:33:32.053362    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:33:32.065015    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:32.065026    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:32.103954    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:32.103964    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:32.142103    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:33:32.142115    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:33:32.155982    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:33:32.155991    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:33:32.167484    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:32.167494    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:32.191092    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:33:32.191101    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:32.203087    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:33:32.203100    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:33:32.216980    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:33:32.216991    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:33:32.231867    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:33:32.231878    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:33:32.243958    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:33:32.243970    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:33:32.258927    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:33:32.258940    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:33:32.270461    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:33:32.270472    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:33:33.763085    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:33.763210    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:33.776038    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:33:33.776129    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:33.787443    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:33:33.787521    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:33.798175    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:33:33.798252    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:33.813850    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:33:33.813930    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:33.824367    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:33:33.824441    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:33.834944    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:33:33.835021    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:33.845658    4162 logs.go:276] 0 containers: []
	W0813 17:33:33.845669    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:33.845728    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:33.856285    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:33:33.856305    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:33.856310    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:33.860758    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:33.860764    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:33.896060    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:33:33.896071    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:33:33.909982    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:33:33.909993    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:33:33.922057    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:33:33.922067    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:33.933831    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:33:33.933841    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:33:33.945595    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:33:33.945604    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:33:33.957032    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:33:33.957043    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:33:33.968556    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:33:33.968567    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:33:33.989508    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:33:33.989519    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:33:34.006671    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:34.006681    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:34.030712    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:34.030719    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:34.064555    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:33:34.064564    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:33:34.076072    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:33:34.076083    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:33:34.087390    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:33:34.087402    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
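
Every "Gathering logs for ..." step in these cycles uses the same bounded idiom: a short shell pipeline on the guest ("docker logs --tail 400", "journalctl ... -n 400") so that a wedged component cannot flood the report, with the container-status step preferring crictl and falling back to docker ps. A hedged sketch of that dispatch, run locally instead of over minikube's SSH runner (gather is an invented helper; the container ID is copied from the transcript):

package main

import (
	"fmt"
	"os/exec"
)

// gather is a hypothetical stand-in for one "Gathering logs for ..." step:
// each log source is just a shell pipeline whose output gets buffered into
// the report.
func gather(name, cmd string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("  %s failed: %v\n", name, err)
	}
	_ = out // the real tool collects this into the failure report
}

func main() {
	// The same bounded-tail idiom the log shows for every source.
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	// `which crictl || echo crictl` keeps the pipeline from aborting when
	// crictl is absent: the literal word "crictl" fails, and the || arm
	// falls back to docker ps.
	gather("container status",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	gather("kube-apiserver [ab7b539f1ed1]", "docker logs --tail 400 ab7b539f1ed1")
}
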
	I0813 17:33:34.784271    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:36.609029    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:39.786366    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:39.786499    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:39.810495    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:33:39.810615    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:39.837978    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:33:39.838052    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:39.864872    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:33:39.864943    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:39.875824    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:33:39.875900    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:39.889882    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:33:39.889961    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:39.901239    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:33:39.901325    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:39.911881    4376 logs.go:276] 0 containers: []
	W0813 17:33:39.911893    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:39.911959    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:39.922576    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:33:39.922594    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:39.922601    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:39.960406    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:33:39.960417    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:33:39.974610    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:33:39.974621    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:33:39.985716    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:33:39.985730    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:33:39.997348    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:33:39.997359    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:33:40.009434    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:33:40.009444    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:33:40.021449    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:33:40.021460    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:33:40.032828    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:40.032839    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:40.070048    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:33:40.070060    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:33:40.090240    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:33:40.090251    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:33:40.102066    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:33:40.102078    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:40.113953    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:40.113965    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:40.118034    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:40.118041    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:40.139064    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:33:40.139070    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:33:40.176235    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:33:40.176246    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:33:40.191273    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:33:40.191284    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:33:40.208510    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:33:40.208520    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:33:42.723128    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:41.609518    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:41.609752    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:41.635365    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:33:41.635502    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:41.652769    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:33:41.652863    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:41.668640    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:33:41.668731    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:41.684534    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:33:41.684606    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:41.708169    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:33:41.708246    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:41.721890    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:33:41.721982    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:41.741086    4162 logs.go:276] 0 containers: []
	W0813 17:33:41.741098    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:41.741162    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:41.757219    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:33:41.757236    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:33:41.757241    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:33:41.771729    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:33:41.771740    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:33:41.789784    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:33:41.789795    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:41.801801    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:41.801812    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:41.835239    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:41.835247    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:41.871005    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:33:41.871017    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:33:41.883564    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:33:41.883577    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:33:41.899269    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:33:41.899280    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:33:41.911192    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:41.911203    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:41.915981    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:33:41.915989    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:33:41.931995    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:33:41.932006    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:33:41.947384    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:33:41.947396    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:33:41.958834    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:41.958844    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:41.983843    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:33:41.983853    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:33:41.996327    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:33:41.996338    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:33:44.509981    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:47.725228    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:47.725425    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:47.743357    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:33:47.743443    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:47.754877    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:33:47.754949    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:47.765168    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:33:47.765241    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:47.777019    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:33:47.777104    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:47.791248    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:33:47.791322    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:47.801705    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:33:47.801781    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:47.812073    4376 logs.go:276] 0 containers: []
	W0813 17:33:47.812085    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:47.812159    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:47.825887    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:33:47.825906    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:47.825912    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:47.829946    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:47.829953    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:47.864459    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:33:47.864469    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:33:47.903034    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:33:47.903045    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:33:47.914677    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:33:47.914688    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:33:47.931925    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:33:47.931936    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:33:47.944448    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:33:47.944458    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:33:47.958946    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:33:47.958956    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:33:47.969947    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:33:47.969958    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:33:47.985213    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:33:47.985224    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:33:49.512285    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:49.512575    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:49.546610    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:33:49.546767    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:49.565175    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:33:49.565285    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:49.579602    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:33:49.579684    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:49.593408    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:33:49.593487    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:49.604351    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:33:49.604432    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:49.615377    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:33:49.615449    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:49.626120    4162 logs.go:276] 0 containers: []
	W0813 17:33:49.626132    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:49.626197    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:49.636316    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:33:49.636334    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:49.636339    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:49.673087    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:33:49.673102    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:33:49.687616    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:33:49.687626    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:33:49.705823    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:49.705834    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:49.710646    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:33:49.710652    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:33:49.726759    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:33:49.726771    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:33:49.744436    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:33:49.744447    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:33:49.755850    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:49.755860    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:49.779436    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:33:49.779449    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:33:49.792192    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:33:49.792205    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:33:49.809874    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:33:49.809885    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:33:49.824829    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:49.824841    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:49.859116    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:33:49.859125    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:33:49.876966    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:33:49.876976    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:49.888285    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:33:49.888296    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:33:48.000803    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:33:48.000817    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:33:48.012883    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:33:48.012894    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:33:48.024340    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:33:48.024351    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:48.036367    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:48.036378    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:48.074018    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:33:48.074028    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:33:48.088588    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:33:48.088600    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:33:48.100310    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:48.100321    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
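
Each gathering cycle above first enumerates the container IDs for every control-plane component with docker ps -a --filter=name=k8s_<component>, then tails the last 400 lines of each match. A rough Go equivalent of that discovery-and-tail loop, assuming docker is on PATH and reusing the k8s_ name prefix visible in the log (component list shortened for illustration):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists docker container IDs whose names match a kubernetes
    // component, mirroring the "docker ps -a --filter=name=k8s_<name>" lines above.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    		for _, id := range ids {
    			// Tail the last 400 lines, as the gathering step above does.
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("--- %s ---\n%s", id, logs)
    		}
    	}
    }

The "0 containers ... No container was found matching \"kindnet\"" warnings are the expected empty result of this enumeration on a cluster that does not run the kindnet CNI.
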
	I0813 17:33:50.625747    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:52.402010    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:55.628033    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:55.628108    4376 kubeadm.go:597] duration metric: took 4m4.184250458s to restartPrimaryControlPlane
	W0813 17:33:55.628184    4376 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0813 17:33:55.628220    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0813 17:33:56.677315    4376 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.049102042s)
	I0813 17:33:56.677376    4376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0813 17:33:56.682461    4376 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 17:33:56.685158    4376 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 17:33:56.688088    4376 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 17:33:56.688095    4376 kubeadm.go:157] found existing configuration files:
	
	I0813 17:33:56.688129    4376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/admin.conf
	I0813 17:33:56.690896    4376 kubeadm.go:163] "https://control-plane.minikube.internal:50478" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0813 17:33:56.690926    4376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0813 17:33:56.693376    4376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/kubelet.conf
	I0813 17:33:56.696406    4376 kubeadm.go:163] "https://control-plane.minikube.internal:50478" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0813 17:33:56.696438    4376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0813 17:33:56.699678    4376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/controller-manager.conf
	I0813 17:33:56.702268    4376 kubeadm.go:163] "https://control-plane.minikube.internal:50478" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0813 17:33:56.702297    4376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0813 17:33:56.704842    4376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/scheduler.conf
	I0813 17:33:56.708031    4376 kubeadm.go:163] "https://control-plane.minikube.internal:50478" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0813 17:33:56.708061    4376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0813 17:33:56.710905    4376 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0813 17:33:56.727383    4376 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0813 17:33:56.727413    4376 kubeadm.go:310] [preflight] Running pre-flight checks
	I0813 17:33:56.778312    4376 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0813 17:33:56.778398    4376 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0813 17:33:56.778481    4376 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0813 17:33:56.831640    4376 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0813 17:33:56.839779    4376 out.go:204]   - Generating certificates and keys ...
	I0813 17:33:56.839813    4376 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0813 17:33:56.839843    4376 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0813 17:33:56.839884    4376 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0813 17:33:56.839923    4376 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0813 17:33:56.839969    4376 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0813 17:33:56.839995    4376 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0813 17:33:56.840028    4376 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0813 17:33:56.840065    4376 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0813 17:33:56.840106    4376 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0813 17:33:56.840147    4376 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0813 17:33:56.840168    4376 kubeadm.go:310] [certs] Using the existing "sa" key
	I0813 17:33:56.840197    4376 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0813 17:33:56.895041    4376 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0813 17:33:56.990784    4376 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0813 17:33:57.129741    4376 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0813 17:33:57.295956    4376 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0813 17:33:57.324702    4376 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0813 17:33:57.325067    4376 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0813 17:33:57.325098    4376 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0813 17:33:57.415738    4376 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0813 17:33:57.423403    4376 out.go:204]   - Booting up control plane ...
	I0813 17:33:57.423489    4376 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0813 17:33:57.423536    4376 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0813 17:33:57.423594    4376 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0813 17:33:57.423641    4376 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0813 17:33:57.423770    4376 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0813 17:33:57.404508    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:57.404628    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:57.416603    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:33:57.416680    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:57.427948    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:33:57.428026    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:57.439425    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:33:57.439506    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:57.450183    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:33:57.450255    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:57.460763    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:33:57.460838    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:57.471246    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:33:57.471318    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:57.481378    4162 logs.go:276] 0 containers: []
	W0813 17:33:57.481391    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:57.481449    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:57.492164    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:33:57.492181    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:33:57.492186    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:33:57.504412    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:33:57.504423    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:57.516878    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:33:57.516889    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:33:57.528968    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:33:57.528979    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:33:57.540741    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:33:57.540752    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:33:57.558192    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:57.558202    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:57.594290    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:57.594302    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:57.599425    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:57.599432    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:57.622919    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:33:57.622926    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:33:57.635364    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:33:57.635374    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:33:57.647377    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:33:57.647387    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:33:57.661242    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:33:57.661253    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:33:57.676873    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:57.676884    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:57.713355    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:33:57.713369    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:33:57.728110    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:33:57.728121    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:34:00.253206    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:01.922353    4376 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503384 seconds
	I0813 17:34:01.922428    4376 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0813 17:34:01.925605    4376 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0813 17:34:02.433038    4376 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0813 17:34:02.433137    4376 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-967000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0813 17:34:02.936461    4376 kubeadm.go:310] [bootstrap-token] Using token: 3nwyfi.oaah5rc09050qhhe
	I0813 17:34:02.937973    4376 out.go:204]   - Configuring RBAC rules ...
	I0813 17:34:02.938120    4376 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0813 17:34:02.938410    4376 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0813 17:34:02.945668    4376 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0813 17:34:02.946933    4376 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0813 17:34:02.947848    4376 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0813 17:34:02.948743    4376 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0813 17:34:02.951716    4376 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0813 17:34:03.108341    4376 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0813 17:34:03.340848    4376 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0813 17:34:03.341213    4376 kubeadm.go:310] 
	I0813 17:34:03.341247    4376 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0813 17:34:03.341250    4376 kubeadm.go:310] 
	I0813 17:34:03.341291    4376 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0813 17:34:03.341294    4376 kubeadm.go:310] 
	I0813 17:34:03.341306    4376 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0813 17:34:03.341348    4376 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0813 17:34:03.341374    4376 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0813 17:34:03.341377    4376 kubeadm.go:310] 
	I0813 17:34:03.341409    4376 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0813 17:34:03.341413    4376 kubeadm.go:310] 
	I0813 17:34:03.341434    4376 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0813 17:34:03.341446    4376 kubeadm.go:310] 
	I0813 17:34:03.341473    4376 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0813 17:34:03.341515    4376 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0813 17:34:03.341561    4376 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0813 17:34:03.341564    4376 kubeadm.go:310] 
	I0813 17:34:03.341604    4376 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0813 17:34:03.341650    4376 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0813 17:34:03.341654    4376 kubeadm.go:310] 
	I0813 17:34:03.341706    4376 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3nwyfi.oaah5rc09050qhhe \
	I0813 17:34:03.341760    4376 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:94a653d9144e0f51dbf8cb0881c67d995fb93f16972a5a4e4bd9f3c8d4a5aa34 \
	I0813 17:34:03.341774    4376 kubeadm.go:310] 	--control-plane 
	I0813 17:34:03.341777    4376 kubeadm.go:310] 
	I0813 17:34:03.341829    4376 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0813 17:34:03.341837    4376 kubeadm.go:310] 
	I0813 17:34:03.341902    4376 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3nwyfi.oaah5rc09050qhhe \
	I0813 17:34:03.341961    4376 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:94a653d9144e0f51dbf8cb0881c67d995fb93f16972a5a4e4bd9f3c8d4a5aa34 
	I0813 17:34:03.342146    4376 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
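
The --discovery-token-ca-cert-hash that kubeadm prints in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A small Go sketch that recomputes it from the CA file; the path below follows the certificateDir kubeadm reported earlier ("/var/lib/minikube/certs"), but the exact filename is an assumption:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// Assumed CA location inside the minikube guest; adjust as needed.
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block found in CA file")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm pins the SHA-256 of the CA's Subject Public Key Info.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }

Joining nodes use this pin to verify they are talking to the intended control plane before trusting the bootstrap token.
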
	I0813 17:34:03.342158    4376 cni.go:84] Creating CNI manager for ""
	I0813 17:34:03.342166    4376 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0813 17:34:03.346058    4376 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 17:34:03.354229    4376 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0813 17:34:03.357535    4376 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0813 17:34:03.362335    4376 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 17:34:03.362384    4376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 17:34:03.362401    4376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-967000 minikube.k8s.io/updated_at=2024_08_13T17_34_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf minikube.k8s.io/name=stopped-upgrade-967000 minikube.k8s.io/primary=true
	I0813 17:34:03.405361    4376 ops.go:34] apiserver oom_adj: -16
	I0813 17:34:03.405402    4376 kubeadm.go:1113] duration metric: took 43.0635ms to wait for elevateKubeSystemPrivileges
	I0813 17:34:03.405469    4376 kubeadm.go:394] duration metric: took 4m11.97615075s to StartCluster
	I0813 17:34:03.405480    4376 settings.go:142] acquiring lock: {Name:mkaf11e998595d0fbc8bedb0051c4325b4dc127d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:34:03.405567    4376 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:34:03.405973    4376 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/kubeconfig: {Name:mk4f6a628d9f9f6550ed229faba2a879ed685a75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:34:03.406160    4376 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:34:03.406166    4376 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0813 17:34:03.406205    4376 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-967000"
	I0813 17:34:03.406217    4376 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-967000"
	W0813 17:34:03.406228    4376 addons.go:243] addon storage-provisioner should already be in state true
	I0813 17:34:03.406237    4376 config.go:182] Loaded profile config "stopped-upgrade-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0813 17:34:03.406238    4376 host.go:66] Checking if "stopped-upgrade-967000" exists ...
	I0813 17:34:03.406267    4376 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-967000"
	I0813 17:34:03.406279    4376 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-967000"
	I0813 17:34:03.407376    4376 kapi.go:59] client config for stopped-upgrade-967000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/client.key", CAFile:"/Users/jenkins/minikube-integration/19429-1127/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105da7e30), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 17:34:03.407491    4376 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-967000"
	W0813 17:34:03.407495    4376 addons.go:243] addon default-storageclass should already be in state true
	I0813 17:34:03.407501    4376 host.go:66] Checking if "stopped-upgrade-967000" exists ...
	I0813 17:34:03.410183    4376 out.go:177] * Verifying Kubernetes components...
	I0813 17:34:03.410621    4376 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 17:34:03.414408    4376 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 17:34:03.414415    4376 sshutil.go:53] new ssh client: &{IP:localhost Port:50444 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/stopped-upgrade-967000/id_rsa Username:docker}
	I0813 17:34:03.418185    4376 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 17:34:05.255339    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:05.255461    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:34:05.267517    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:34:05.267594    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:34:05.279133    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:34:05.279204    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:34:05.290468    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:34:05.290540    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:34:05.300954    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:34:05.301037    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:34:05.311623    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:34:05.311708    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:34:05.323055    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:34:05.323129    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:34:05.333788    4162 logs.go:276] 0 containers: []
	W0813 17:34:05.333800    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:34:05.333862    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:34:05.347016    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:34:05.347034    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:34:05.347039    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:34:05.358819    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:34:05.358830    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:34:05.393417    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:34:05.393427    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:34:05.405574    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:34:05.405585    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:34:05.430346    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:34:05.430355    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:34:05.464178    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:34:05.464190    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:34:05.475967    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:34:05.475978    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:34:05.487839    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:34:05.487851    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:34:05.503076    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:34:05.503086    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:34:05.520451    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:34:05.520464    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:34:05.536894    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:34:05.536905    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:34:05.541432    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:34:05.541440    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:34:05.555579    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:34:05.555589    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:34:05.567603    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:34:05.567614    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:34:05.582173    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:34:05.582184    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:34:03.422232    4376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 17:34:03.426092    4376 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 17:34:03.426100    4376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 17:34:03.426107    4376 sshutil.go:53] new ssh client: &{IP:localhost Port:50444 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/stopped-upgrade-967000/id_rsa Username:docker}
	I0813 17:34:03.518780    4376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0813 17:34:03.524255    4376 api_server.go:52] waiting for apiserver process to appear ...
	I0813 17:34:03.524327    4376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 17:34:03.531353    4376 api_server.go:72] duration metric: took 125.182083ms to wait for apiserver process to appear ...
	I0813 17:34:03.531366    4376 api_server.go:88] waiting for apiserver healthz status ...
	I0813 17:34:03.531375    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:03.535827    4376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 17:34:03.544666    4376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 17:34:03.890896    4376 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0813 17:34:03.890908    4376 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0813 17:34:08.104392    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:08.533356    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:08.533376    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:13.106507    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:13.106634    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:34:13.117900    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:34:13.117991    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:34:13.129149    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:34:13.129233    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:34:13.140102    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:34:13.140184    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:34:13.150775    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:34:13.150848    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:34:13.161325    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:34:13.161401    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:34:13.172193    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:34:13.172264    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:34:13.182729    4162 logs.go:276] 0 containers: []
	W0813 17:34:13.182742    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:34:13.182819    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:34:13.194883    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:34:13.194901    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:34:13.194907    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:34:13.207003    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:34:13.207014    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:34:13.224406    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:34:13.224415    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:34:13.249884    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:34:13.249899    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:34:13.285545    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:34:13.285559    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:34:13.300001    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:34:13.300014    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:34:13.312095    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:34:13.312109    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:34:13.324351    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:34:13.324362    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:34:13.336086    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:34:13.336096    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:34:13.347930    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:34:13.347940    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:34:13.383039    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:34:13.383049    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:34:13.388082    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:34:13.388091    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:34:13.402484    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:34:13.402495    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:34:13.417863    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:34:13.417873    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:34:13.429062    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:34:13.429073    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:34:15.942730    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:13.533913    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:13.533932    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:20.942829    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:20.942923    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:34:20.953444    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:34:20.953512    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:34:20.964097    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:34:20.964180    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:34:20.975364    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:34:20.975434    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:34:20.986691    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:34:20.986771    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:34:20.997277    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:34:20.997352    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:34:21.008142    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:34:21.008219    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:34:21.018189    4162 logs.go:276] 0 containers: []
	W0813 17:34:21.018199    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:34:21.018255    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:34:21.028964    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:34:21.028979    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:34:21.028984    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:34:21.033625    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:34:21.033633    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:34:21.048348    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:34:21.048358    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:34:21.062518    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:34:21.062528    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:34:21.074437    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:34:21.074448    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:34:21.086469    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:34:21.086481    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:34:21.098237    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:34:21.098248    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:34:21.112673    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:34:21.112684    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:34:21.124187    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:34:21.124198    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:34:21.135768    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:34:21.135780    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:34:21.171168    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:34:21.171180    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:34:18.534453    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:18.534474    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:21.189181    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:34:21.189192    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:34:21.200517    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:34:21.200530    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:34:21.235238    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:34:21.235255    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:34:21.246826    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:34:21.246838    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:34:23.771062    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:23.535072    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:23.535099    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:28.773285    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:28.773444    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:34:28.789793    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:34:28.789891    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:34:28.802966    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:34:28.803038    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:34:28.814757    4162 logs.go:276] 4 containers: [edc79ce83d8a 7e4d0301e234 f436aa55d977 538bf00465c8]
	I0813 17:34:28.814828    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:34:28.825058    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:34:28.825135    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:34:28.835376    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:34:28.835447    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:34:28.845860    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:34:28.845926    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:34:28.855548    4162 logs.go:276] 0 containers: []
	W0813 17:34:28.855558    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:34:28.855612    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:34:28.866034    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:34:28.866053    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:34:28.866059    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:34:28.900716    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:34:28.900728    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:34:28.912441    4162 logs.go:123] Gathering logs for coredns [f436aa55d977] ...
	I0813 17:34:28.912451    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f436aa55d977"
	I0813 17:34:28.924487    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:34:28.924501    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:34:28.937085    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:34:28.937100    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:34:28.948351    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:34:28.948362    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:34:28.962558    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:34:28.962569    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:34:28.984177    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:34:28.984187    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:34:28.999652    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:34:28.999662    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:34:29.011231    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:34:29.011243    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:34:29.047034    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:34:29.047045    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:34:29.065363    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:34:29.065372    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:34:29.069867    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:34:29.069875    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:34:29.085530    4162 logs.go:123] Gathering logs for coredns [538bf00465c8] ...
	I0813 17:34:29.085541    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538bf00465c8"
	I0813 17:34:29.102502    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:34:29.102513    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:34:28.535996    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:28.536021    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:33.536908    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:33.536944    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0813 17:34:33.892809    4376 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0813 17:34:33.897222    4376 out.go:177] * Enabled addons: storage-provisioner
	I0813 17:34:31.628902    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:33.904037    4376 addons.go:510] duration metric: took 30.498397875s for enable addons: enabled=[storage-provisioner]
	I0813 17:34:36.631021    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:36.631123    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:34:36.645846    4162 logs.go:276] 1 containers: [b41e40ee25c7]
	I0813 17:34:36.645933    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:34:36.656874    4162 logs.go:276] 1 containers: [e08b09b94f46]
	I0813 17:34:36.656963    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:34:36.668602    4162 logs.go:276] 4 containers: [48dc668317e4 ebb5807747c3 edc79ce83d8a 7e4d0301e234]
	I0813 17:34:36.668680    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:34:36.678885    4162 logs.go:276] 1 containers: [c33e686df7f8]
	I0813 17:34:36.678959    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:34:36.689447    4162 logs.go:276] 1 containers: [57a9d727e009]
	I0813 17:34:36.689527    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:34:36.699707    4162 logs.go:276] 1 containers: [79bcddc9f413]
	I0813 17:34:36.699778    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:34:36.710127    4162 logs.go:276] 0 containers: []
	W0813 17:34:36.710138    4162 logs.go:278] No container was found matching "kindnet"
	I0813 17:34:36.710200    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:34:36.720686    4162 logs.go:276] 1 containers: [701c525892f6]
	I0813 17:34:36.720703    4162 logs.go:123] Gathering logs for container status ...
	I0813 17:34:36.720709    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:34:36.732487    4162 logs.go:123] Gathering logs for coredns [edc79ce83d8a] ...
	I0813 17:34:36.732497    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edc79ce83d8a"
	I0813 17:34:36.744556    4162 logs.go:123] Gathering logs for kube-apiserver [b41e40ee25c7] ...
	I0813 17:34:36.744568    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b41e40ee25c7"
	I0813 17:34:36.758729    4162 logs.go:123] Gathering logs for kube-proxy [57a9d727e009] ...
	I0813 17:34:36.758739    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57a9d727e009"
	I0813 17:34:36.770330    4162 logs.go:123] Gathering logs for storage-provisioner [701c525892f6] ...
	I0813 17:34:36.770340    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701c525892f6"
	I0813 17:34:36.782058    4162 logs.go:123] Gathering logs for Docker ...
	I0813 17:34:36.782069    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:34:36.806655    4162 logs.go:123] Gathering logs for kubelet ...
	I0813 17:34:36.806665    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:34:36.841611    4162 logs.go:123] Gathering logs for etcd [e08b09b94f46] ...
	I0813 17:34:36.841620    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e08b09b94f46"
	I0813 17:34:36.855672    4162 logs.go:123] Gathering logs for coredns [ebb5807747c3] ...
	I0813 17:34:36.855682    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb5807747c3"
	I0813 17:34:36.867462    4162 logs.go:123] Gathering logs for dmesg ...
	I0813 17:34:36.867476    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:34:36.872256    4162 logs.go:123] Gathering logs for coredns [48dc668317e4] ...
	I0813 17:34:36.872263    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48dc668317e4"
	I0813 17:34:36.883553    4162 logs.go:123] Gathering logs for coredns [7e4d0301e234] ...
	I0813 17:34:36.883563    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e4d0301e234"
	I0813 17:34:36.894845    4162 logs.go:123] Gathering logs for kube-scheduler [c33e686df7f8] ...
	I0813 17:34:36.894855    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33e686df7f8"
	I0813 17:34:36.909764    4162 logs.go:123] Gathering logs for kube-controller-manager [79bcddc9f413] ...
	I0813 17:34:36.909773    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79bcddc9f413"
	I0813 17:34:36.927093    4162 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:34:36.927102    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:34:39.464956    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:38.538147    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:38.538188    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:44.467144    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:44.471733    4162 out.go:177] 
	W0813 17:34:44.475604    4162 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0813 17:34:44.475616    4162 out.go:239] * 
	W0813 17:34:44.476315    4162 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:34:44.487603    4162 out.go:177] 
	I0813 17:34:43.539729    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:43.539751    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:48.540684    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:48.540730    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:53.541543    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:53.541585    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Wed 2024-08-14 00:25:37 UTC, ends at Wed 2024-08-14 00:35:00 UTC. --
	Aug 14 00:34:36 running-upgrade-126000 cri-dockerd[3171]: time="2024-08-14T00:34:36Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 14 00:34:41 running-upgrade-126000 cri-dockerd[3171]: time="2024-08-14T00:34:41Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 14 00:34:45 running-upgrade-126000 cri-dockerd[3171]: time="2024-08-14T00:34:45Z" level=error msg="ContainerStats resp: {0x4000803140 linux}"
	Aug 14 00:34:45 running-upgrade-126000 cri-dockerd[3171]: time="2024-08-14T00:34:45Z" level=error msg="ContainerStats resp: {0x4000803840 linux}"
	Aug 14 00:34:46 running-upgrade-126000 cri-dockerd[3171]: time="2024-08-14T00:34:46Z" level=error msg="ContainerStats resp: {0x40007cae40 linux}"
	Aug 14 00:34:46 running-upgrade-126000 cri-dockerd[3171]: time="2024-08-14T00:34:46Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 14 00:34:47 running-upgrade-126000 cri-dockerd[3171]: time="2024-08-14T00:34:47Z" level=error msg="ContainerStats resp: {0x4000697180 linux}"
	Aug 14 00:34:47 running-upgrade-126000 cri-dockerd[3171]: time="2024-08-14T00:34:47Z" level=error msg="ContainerStats resp: {0x40006974c0 linux}"
	Aug 14 00:34:47 running-upgrade-126000 cri-dockerd[3171]: time="2024-08-14T00:34:47Z" level=error msg="ContainerStats resp: {0x4000697980 linux}"
	Aug 14 00:34:47 running-upgrade-126000 cri-dockerd[3171]: time="2024-08-14T00:34:47Z" level=error msg="ContainerStats resp: {0x4000697ac0 linux}"
	Aug 14 00:34:47 running-upgrade-126000 cri-dockerd[3171]: time="2024-08-14T00:34:47Z" level=error msg="ContainerStats resp: {0x4000355240 linux}"
	Aug 14 00:34:47 running-upgrade-126000 cri-dockerd[3171]: time="2024-08-14T00:34:47Z" level=error msg="ContainerStats resp: {0x40003558c0 linux}"
	Aug 14 00:34:47 running-upgrade-126000 cri-dockerd[3171]: time="2024-08-14T00:34:47Z" level=error msg="ContainerStats resp: {0x40008aaa00 linux}"
	Aug 14 00:34:51 running-upgrade-126000 cri-dockerd[3171]: time="2024-08-14T00:34:51Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 14 00:34:56 running-upgrade-126000 cri-dockerd[3171]: time="2024-08-14T00:34:56Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 14 00:34:57 running-upgrade-126000 cri-dockerd[3171]: time="2024-08-14T00:34:57Z" level=error msg="ContainerStats resp: {0x40007f11c0 linux}"
	Aug 14 00:34:57 running-upgrade-126000 cri-dockerd[3171]: time="2024-08-14T00:34:57Z" level=error msg="ContainerStats resp: {0x400087c3c0 linux}"
	Aug 14 00:34:58 running-upgrade-126000 cri-dockerd[3171]: time="2024-08-14T00:34:58Z" level=error msg="ContainerStats resp: {0x400087d2c0 linux}"
	Aug 14 00:34:59 running-upgrade-126000 cri-dockerd[3171]: time="2024-08-14T00:34:59Z" level=error msg="ContainerStats resp: {0x400087dc80 linux}"
	Aug 14 00:34:59 running-upgrade-126000 cri-dockerd[3171]: time="2024-08-14T00:34:59Z" level=error msg="ContainerStats resp: {0x4000696c00 linux}"
	Aug 14 00:34:59 running-upgrade-126000 cri-dockerd[3171]: time="2024-08-14T00:34:59Z" level=error msg="ContainerStats resp: {0x4000697400 linux}"
	Aug 14 00:34:59 running-upgrade-126000 cri-dockerd[3171]: time="2024-08-14T00:34:59Z" level=error msg="ContainerStats resp: {0x40007ca800 linux}"
	Aug 14 00:34:59 running-upgrade-126000 cri-dockerd[3171]: time="2024-08-14T00:34:59Z" level=error msg="ContainerStats resp: {0x40007ca940 linux}"
	Aug 14 00:34:59 running-upgrade-126000 cri-dockerd[3171]: time="2024-08-14T00:34:59Z" level=error msg="ContainerStats resp: {0x4000484980 linux}"
	Aug 14 00:34:59 running-upgrade-126000 cri-dockerd[3171]: time="2024-08-14T00:34:59Z" level=error msg="ContainerStats resp: {0x40007cb700 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	48dc668317e47       edaa71f2aee88       25 seconds ago      Running             coredns                   2                   8ccfdf1bbff8e
	ebb5807747c32       edaa71f2aee88       25 seconds ago      Running             coredns                   2                   475ab32a6b181
	edc79ce83d8a2       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   8ccfdf1bbff8e
	7e4d0301e234d       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   475ab32a6b181
	57a9d727e0094       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   afedd5ae71246
	701c525892f6d       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   592d803a25e83
	e08b09b94f46c       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   72fa793014500
	79bcddc9f4134       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   47b98d8ced578
	c33e686df7f8f       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   b91a924bcdc99
	b41e40ee25c79       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   f376eb3cb45d7
	
	
	==> coredns [48dc668317e4] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8634267933226048046.1856990932420635279. HINFO: read udp 10.244.0.2:53929->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8634267933226048046.1856990932420635279. HINFO: read udp 10.244.0.2:41932->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8634267933226048046.1856990932420635279. HINFO: read udp 10.244.0.2:36961->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8634267933226048046.1856990932420635279. HINFO: read udp 10.244.0.2:43991->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8634267933226048046.1856990932420635279. HINFO: read udp 10.244.0.2:50681->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8634267933226048046.1856990932420635279. HINFO: read udp 10.244.0.2:54909->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8634267933226048046.1856990932420635279. HINFO: read udp 10.244.0.2:47294->10.0.2.3:53: i/o timeout
	
	
	==> coredns [7e4d0301e234] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6666133645693524132.1603899005626896053. HINFO: read udp 10.244.0.3:45993->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6666133645693524132.1603899005626896053. HINFO: read udp 10.244.0.3:48074->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6666133645693524132.1603899005626896053. HINFO: read udp 10.244.0.3:52198->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6666133645693524132.1603899005626896053. HINFO: read udp 10.244.0.3:41509->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6666133645693524132.1603899005626896053. HINFO: read udp 10.244.0.3:35845->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6666133645693524132.1603899005626896053. HINFO: read udp 10.244.0.3:37711->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6666133645693524132.1603899005626896053. HINFO: read udp 10.244.0.3:48260->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6666133645693524132.1603899005626896053. HINFO: read udp 10.244.0.3:44141->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6666133645693524132.1603899005626896053. HINFO: read udp 10.244.0.3:57607->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6666133645693524132.1603899005626896053. HINFO: read udp 10.244.0.3:58775->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ebb5807747c3] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3587341835955972380.5731356044325699313. HINFO: read udp 10.244.0.3:33805->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3587341835955972380.5731356044325699313. HINFO: read udp 10.244.0.3:37888->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3587341835955972380.5731356044325699313. HINFO: read udp 10.244.0.3:39510->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3587341835955972380.5731356044325699313. HINFO: read udp 10.244.0.3:44844->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3587341835955972380.5731356044325699313. HINFO: read udp 10.244.0.3:52965->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3587341835955972380.5731356044325699313. HINFO: read udp 10.244.0.3:41653->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3587341835955972380.5731356044325699313. HINFO: read udp 10.244.0.3:34510->10.0.2.3:53: i/o timeout
	
	
	==> coredns [edc79ce83d8a] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8845343133458088884.2611317013573094358. HINFO: read udp 10.244.0.2:49346->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8845343133458088884.2611317013573094358. HINFO: read udp 10.244.0.2:46025->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8845343133458088884.2611317013573094358. HINFO: read udp 10.244.0.2:41599->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8845343133458088884.2611317013573094358. HINFO: read udp 10.244.0.2:44146->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8845343133458088884.2611317013573094358. HINFO: read udp 10.244.0.2:51668->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8845343133458088884.2611317013573094358. HINFO: read udp 10.244.0.2:53801->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8845343133458088884.2611317013573094358. HINFO: read udp 10.244.0.2:58957->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8845343133458088884.2611317013573094358. HINFO: read udp 10.244.0.2:34135->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8845343133458088884.2611317013573094358. HINFO: read udp 10.244.0.2:49120->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8845343133458088884.2611317013573094358. HINFO: read udp 10.244.0.2:59654->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-126000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-126000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf
	                    minikube.k8s.io/name=running-upgrade-126000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_13T17_30_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 00:30:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-126000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 00:34:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 00:30:43 +0000   Wed, 14 Aug 2024 00:30:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 00:30:43 +0000   Wed, 14 Aug 2024 00:30:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 00:30:43 +0000   Wed, 14 Aug 2024 00:30:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 00:30:43 +0000   Wed, 14 Aug 2024 00:30:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-126000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 19357efa8c584aa4a66133a5cbaca896
	  System UUID:                19357efa8c584aa4a66133a5cbaca896
	  Boot ID:                    d5c3fe8c-dd5c-4dd8-90ad-414cac3ed5dd
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-j67n6                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-rqgsz                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-126000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-126000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-126000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-dhcnz                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-126000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  Starting                 4m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m23s (x3 over 4m23s)  kubelet          Node running-upgrade-126000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x3 over 4m23s)  kubelet          Node running-upgrade-126000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x3 over 4m23s)  kubelet          Node running-upgrade-126000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-126000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-126000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-126000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-126000 status is now: NodeReady
	  Normal  RegisteredNode           4m4s                   node-controller  Node running-upgrade-126000 event: Registered Node running-upgrade-126000 in Controller
	
	
	==> dmesg <==
	[  +0.076827] systemd-fstab-generator[849]: Ignoring "noauto" for root device
	[  +0.080626] systemd-fstab-generator[860]: Ignoring "noauto" for root device
	[  +1.136412] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.079298] systemd-fstab-generator[1010]: Ignoring "noauto" for root device
	[  +0.073174] systemd-fstab-generator[1021]: Ignoring "noauto" for root device
	[Aug14 00:26] systemd-fstab-generator[1302]: Ignoring "noauto" for root device
	[  +0.319530] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.823642] systemd-fstab-generator[1934]: Ignoring "noauto" for root device
	[  +2.663649] systemd-fstab-generator[2213]: Ignoring "noauto" for root device
	[  +0.146126] systemd-fstab-generator[2247]: Ignoring "noauto" for root device
	[  +0.096830] systemd-fstab-generator[2260]: Ignoring "noauto" for root device
	[  +0.094761] systemd-fstab-generator[2273]: Ignoring "noauto" for root device
	[  +2.814385] kauditd_printk_skb: 8 callbacks suppressed
	[  +0.205582] systemd-fstab-generator[3125]: Ignoring "noauto" for root device
	[  +0.098753] systemd-fstab-generator[3139]: Ignoring "noauto" for root device
	[  +0.086965] systemd-fstab-generator[3150]: Ignoring "noauto" for root device
	[  +0.089781] systemd-fstab-generator[3164]: Ignoring "noauto" for root device
	[  +2.266806] systemd-fstab-generator[3318]: Ignoring "noauto" for root device
	[  +2.576228] systemd-fstab-generator[3826]: Ignoring "noauto" for root device
	[  +1.070338] systemd-fstab-generator[4088]: Ignoring "noauto" for root device
	[ +18.608686] kauditd_printk_skb: 68 callbacks suppressed
	[Aug14 00:30] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.465505] systemd-fstab-generator[12233]: Ignoring "noauto" for root device
	[  +6.138986] systemd-fstab-generator[12848]: Ignoring "noauto" for root device
	[  +0.482657] systemd-fstab-generator[12981]: Ignoring "noauto" for root device
	
	
	==> etcd [e08b09b94f46] <==
	{"level":"info","ts":"2024-08-14T00:30:38.683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-08-14T00:30:38.683Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-08-14T00:30:38.694Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-14T00:30:38.697Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-14T00:30:38.698Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-14T00:30:38.698Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-14T00:30:38.698Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-14T00:30:39.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-14T00:30:39.663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-14T00:30:39.663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-08-14T00:30:39.663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-08-14T00:30:39.663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-14T00:30:39.663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-08-14T00:30:39.663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-14T00:30:39.663Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T00:30:39.666Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T00:30:39.666Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T00:30:39.666Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T00:30:39.666Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-126000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-14T00:30:39.667Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T00:30:39.667Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T00:30:39.669Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-08-14T00:30:39.675Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-14T00:30:39.675Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-14T00:30:39.675Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 00:35:00 up 9 min,  0 users,  load average: 0.27, 0.32, 0.19
	Linux running-upgrade-126000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [b41e40ee25c7] <==
	I0814 00:30:40.887125       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0814 00:30:40.898491       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0814 00:30:40.913224       1 cache.go:39] Caches are synced for autoregister controller
	I0814 00:30:40.914841       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0814 00:30:40.914865       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0814 00:30:40.916136       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0814 00:30:40.916153       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0814 00:30:41.633728       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0814 00:30:41.819391       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0814 00:30:41.821786       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0814 00:30:41.821804       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0814 00:30:41.964253       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0814 00:30:41.974466       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0814 00:30:42.062540       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0814 00:30:42.064523       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0814 00:30:42.064905       1 controller.go:611] quota admission added evaluator for: endpoints
	I0814 00:30:42.066273       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0814 00:30:42.949259       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0814 00:30:43.548762       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0814 00:30:43.551899       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0814 00:30:43.586642       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0814 00:30:43.603170       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0814 00:30:56.721814       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0814 00:30:56.871619       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0814 00:30:57.258691       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [79bcddc9f413] <==
	I0814 00:30:56.123759       1 shared_informer.go:262] Caches are synced for deployment
	I0814 00:30:56.123771       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0814 00:30:56.123929       1 shared_informer.go:262] Caches are synced for TTL
	I0814 00:30:56.124040       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0814 00:30:56.123767       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0814 00:30:56.125875       1 shared_informer.go:262] Caches are synced for service account
	I0814 00:30:56.192237       1 shared_informer.go:262] Caches are synced for disruption
	I0814 00:30:56.192246       1 disruption.go:371] Sending events to api server.
	I0814 00:30:56.269081       1 shared_informer.go:262] Caches are synced for taint
	I0814 00:30:56.269141       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0814 00:30:56.269174       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-126000. Assuming now as a timestamp.
	I0814 00:30:56.269224       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0814 00:30:56.269295       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0814 00:30:56.269354       1 event.go:294] "Event occurred" object="running-upgrade-126000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-126000 event: Registered Node running-upgrade-126000 in Controller"
	I0814 00:30:56.304031       1 shared_informer.go:262] Caches are synced for resource quota
	I0814 00:30:56.321272       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0814 00:30:56.321318       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0814 00:30:56.326213       1 shared_informer.go:262] Caches are synced for resource quota
	I0814 00:30:56.725094       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-dhcnz"
	I0814 00:30:56.740671       1 shared_informer.go:262] Caches are synced for garbage collector
	I0814 00:30:56.769272       1 shared_informer.go:262] Caches are synced for garbage collector
	I0814 00:30:56.769282       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0814 00:30:56.872989       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0814 00:30:57.124758       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-rqgsz"
	I0814 00:30:57.129683       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-j67n6"
	
	
	==> kube-proxy [57a9d727e009] <==
	I0814 00:30:57.247457       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0814 00:30:57.247482       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0814 00:30:57.247492       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0814 00:30:57.256483       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0814 00:30:57.256494       1 server_others.go:206] "Using iptables Proxier"
	I0814 00:30:57.256508       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0814 00:30:57.256598       1 server.go:661] "Version info" version="v1.24.1"
	I0814 00:30:57.256602       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 00:30:57.256830       1 config.go:317] "Starting service config controller"
	I0814 00:30:57.256836       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0814 00:30:57.256844       1 config.go:226] "Starting endpoint slice config controller"
	I0814 00:30:57.256845       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0814 00:30:57.257649       1 config.go:444] "Starting node config controller"
	I0814 00:30:57.257664       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0814 00:30:57.357197       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0814 00:30:57.357221       1 shared_informer.go:262] Caches are synced for service config
	I0814 00:30:57.357731       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [c33e686df7f8] <==
	W0814 00:30:40.859047       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 00:30:40.859070       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0814 00:30:40.859147       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 00:30:40.859175       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0814 00:30:40.859221       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 00:30:40.859336       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0814 00:30:40.859260       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 00:30:40.859377       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0814 00:30:40.859271       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 00:30:40.859432       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0814 00:30:40.859283       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 00:30:40.859480       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0814 00:30:40.859306       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 00:30:40.859515       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0814 00:30:40.859324       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 00:30:40.859562       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0814 00:30:41.680731       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 00:30:41.680751       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0814 00:30:41.753656       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 00:30:41.753702       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0814 00:30:41.826636       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 00:30:41.826665       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0814 00:30:41.923783       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 00:30:41.923800       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0814 00:30:42.354979       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-08-14 00:25:37 UTC, ends at Wed 2024-08-14 00:35:01 UTC. --
	Aug 14 00:30:44 running-upgrade-126000 kubelet[12854]: I0814 00:30:44.811549   12854 reconciler.go:157] "Reconciler: start to sync state"
	Aug 14 00:30:45 running-upgrade-126000 kubelet[12854]: E0814 00:30:45.186753   12854 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-126000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-126000"
	Aug 14 00:30:45 running-upgrade-126000 kubelet[12854]: E0814 00:30:45.385152   12854 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-126000\" already exists" pod="kube-system/etcd-running-upgrade-126000"
	Aug 14 00:30:45 running-upgrade-126000 kubelet[12854]: E0814 00:30:45.582392   12854 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-126000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-126000"
	Aug 14 00:30:45 running-upgrade-126000 kubelet[12854]: I0814 00:30:45.775731   12854 request.go:601] Waited for 1.123047612s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Aug 14 00:30:45 running-upgrade-126000 kubelet[12854]: E0814 00:30:45.780335   12854 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-126000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-126000"
	Aug 14 00:30:56 running-upgrade-126000 kubelet[12854]: I0814 00:30:56.166675   12854 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 14 00:30:56 running-upgrade-126000 kubelet[12854]: I0814 00:30:56.167003   12854 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 14 00:30:56 running-upgrade-126000 kubelet[12854]: I0814 00:30:56.274493   12854 topology_manager.go:200] "Topology Admit Handler"
	Aug 14 00:30:56 running-upgrade-126000 kubelet[12854]: I0814 00:30:56.468930   12854 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/aa0ddc83-3e64-46cc-bde3-d790abb0cb10-tmp\") pod \"storage-provisioner\" (UID: \"aa0ddc83-3e64-46cc-bde3-d790abb0cb10\") " pod="kube-system/storage-provisioner"
	Aug 14 00:30:56 running-upgrade-126000 kubelet[12854]: I0814 00:30:56.468962   12854 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2dgx\" (UniqueName: \"kubernetes.io/projected/aa0ddc83-3e64-46cc-bde3-d790abb0cb10-kube-api-access-r2dgx\") pod \"storage-provisioner\" (UID: \"aa0ddc83-3e64-46cc-bde3-d790abb0cb10\") " pod="kube-system/storage-provisioner"
	Aug 14 00:30:56 running-upgrade-126000 kubelet[12854]: I0814 00:30:56.711221   12854 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="592d803a25e8351c2c815065a78259c3b4ea5fd7450bb219091245ccadab0fb7"
	Aug 14 00:30:56 running-upgrade-126000 kubelet[12854]: I0814 00:30:56.729993   12854 topology_manager.go:200] "Topology Admit Handler"
	Aug 14 00:30:56 running-upgrade-126000 kubelet[12854]: I0814 00:30:56.880117   12854 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a9539af-5b51-4ba8-9200-ab2b02440e1d-xtables-lock\") pod \"kube-proxy-dhcnz\" (UID: \"3a9539af-5b51-4ba8-9200-ab2b02440e1d\") " pod="kube-system/kube-proxy-dhcnz"
	Aug 14 00:30:56 running-upgrade-126000 kubelet[12854]: I0814 00:30:56.880140   12854 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a9539af-5b51-4ba8-9200-ab2b02440e1d-lib-modules\") pod \"kube-proxy-dhcnz\" (UID: \"3a9539af-5b51-4ba8-9200-ab2b02440e1d\") " pod="kube-system/kube-proxy-dhcnz"
	Aug 14 00:30:56 running-upgrade-126000 kubelet[12854]: I0814 00:30:56.880153   12854 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgcd6\" (UniqueName: \"kubernetes.io/projected/3a9539af-5b51-4ba8-9200-ab2b02440e1d-kube-api-access-hgcd6\") pod \"kube-proxy-dhcnz\" (UID: \"3a9539af-5b51-4ba8-9200-ab2b02440e1d\") " pod="kube-system/kube-proxy-dhcnz"
	Aug 14 00:30:56 running-upgrade-126000 kubelet[12854]: I0814 00:30:56.880163   12854 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3a9539af-5b51-4ba8-9200-ab2b02440e1d-kube-proxy\") pod \"kube-proxy-dhcnz\" (UID: \"3a9539af-5b51-4ba8-9200-ab2b02440e1d\") " pod="kube-system/kube-proxy-dhcnz"
	Aug 14 00:30:57 running-upgrade-126000 kubelet[12854]: I0814 00:30:57.125832   12854 topology_manager.go:200] "Topology Admit Handler"
	Aug 14 00:30:57 running-upgrade-126000 kubelet[12854]: I0814 00:30:57.135852   12854 topology_manager.go:200] "Topology Admit Handler"
	Aug 14 00:30:57 running-upgrade-126000 kubelet[12854]: I0814 00:30:57.182347   12854 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz5gd\" (UniqueName: \"kubernetes.io/projected/cef82598-b801-4276-9be5-66e00bea6110-kube-api-access-fz5gd\") pod \"coredns-6d4b75cb6d-rqgsz\" (UID: \"cef82598-b801-4276-9be5-66e00bea6110\") " pod="kube-system/coredns-6d4b75cb6d-rqgsz"
	Aug 14 00:30:57 running-upgrade-126000 kubelet[12854]: I0814 00:30:57.182368   12854 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9626a8ad-f7ed-4350-86c4-73db3af478d0-config-volume\") pod \"coredns-6d4b75cb6d-j67n6\" (UID: \"9626a8ad-f7ed-4350-86c4-73db3af478d0\") " pod="kube-system/coredns-6d4b75cb6d-j67n6"
	Aug 14 00:30:57 running-upgrade-126000 kubelet[12854]: I0814 00:30:57.182417   12854 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9qf8\" (UniqueName: \"kubernetes.io/projected/9626a8ad-f7ed-4350-86c4-73db3af478d0-kube-api-access-m9qf8\") pod \"coredns-6d4b75cb6d-j67n6\" (UID: \"9626a8ad-f7ed-4350-86c4-73db3af478d0\") " pod="kube-system/coredns-6d4b75cb6d-j67n6"
	Aug 14 00:30:57 running-upgrade-126000 kubelet[12854]: I0814 00:30:57.182432   12854 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cef82598-b801-4276-9be5-66e00bea6110-config-volume\") pod \"coredns-6d4b75cb6d-rqgsz\" (UID: \"cef82598-b801-4276-9be5-66e00bea6110\") " pod="kube-system/coredns-6d4b75cb6d-rqgsz"
	Aug 14 00:34:36 running-upgrade-126000 kubelet[12854]: I0814 00:34:36.035556   12854 scope.go:110] "RemoveContainer" containerID="538bf00465c8cb536d4a9a9f3b9dd3e906739db10741ae152f13d3cc5d3e05a1"
	Aug 14 00:34:36 running-upgrade-126000 kubelet[12854]: I0814 00:34:36.058068   12854 scope.go:110] "RemoveContainer" containerID="f436aa55d9771f4f7326c923755e0aec86d6928e1507432c62920ecb6bd1b349"
	
	
	==> storage-provisioner [701c525892f6] <==
	I0814 00:30:56.780648       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 00:30:56.784698       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 00:30:56.784716       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 00:30:56.787283       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 00:30:56.787393       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"03f53f9a-e48e-4cf2-8b6d-92050f35d929", APIVersion:"v1", ResourceVersion:"342", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-126000_3b22a3f6-9cc0-402e-84b2-5550285b9b87 became leader
	I0814 00:30:56.787412       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-126000_3b22a3f6-9cc0-402e-84b2-5550285b9b87!
	I0814 00:30:56.888245       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-126000_3b22a3f6-9cc0-402e-84b2-5550285b9b87!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-126000 -n running-upgrade-126000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-126000 -n running-upgrade-126000: exit status 2 (15.666132917s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-126000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-126000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-126000
--- FAIL: TestRunningBinaryUpgrade (603.93s)
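For anyone replaying this failure by hand: the post-mortem probe above is just an exec of the built binary followed by an exit-code check. A minimal Go sketch under that assumption (binary path, profile name, and flags are copied from the log; the code is illustrative, not the suite's actual helpers):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same probe as helpers_test.go:254 above: ask the profile for its
		// apiserver state and surface the non-zero exit that failed the test.
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.APIServer}}", "-p", "running-upgrade-126000",
			"-n", "running-upgrade-126000")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out) // prints "Stopped" in the run captured above
		if err != nil {
			fmt.Println("status probe failed:", err) // e.g. "exit status 2"
		}
	}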

TestKubernetesUpgrade (18.5s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-397000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-397000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.863609541s)

-- stdout --
	* [kubernetes-upgrade-397000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-397000" primary control-plane node in "kubernetes-upgrade-397000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-397000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0813 17:28:13.213297    4276 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:28:13.213463    4276 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:28:13.213469    4276 out.go:304] Setting ErrFile to fd 2...
	I0813 17:28:13.213471    4276 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:28:13.213619    4276 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:28:13.214907    4276 out.go:298] Setting JSON to false
	I0813 17:28:13.232973    4276 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3457,"bootTime":1723591836,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:28:13.233049    4276 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:28:13.238380    4276 out.go:177] * [kubernetes-upgrade-397000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:28:13.246391    4276 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:28:13.246488    4276 notify.go:220] Checking for updates...
	I0813 17:28:13.252337    4276 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:28:13.255345    4276 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:28:13.258367    4276 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:28:13.261310    4276 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:28:13.264348    4276 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:28:13.267765    4276 config.go:182] Loaded profile config "multinode-980000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:28:13.267837    4276 config.go:182] Loaded profile config "running-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0813 17:28:13.267900    4276 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:28:13.270303    4276 out.go:177] * Using the qemu2 driver based on user configuration
	I0813 17:28:13.281379    4276 start.go:297] selected driver: qemu2
	I0813 17:28:13.281386    4276 start.go:901] validating driver "qemu2" against <nil>
	I0813 17:28:13.281393    4276 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:28:13.283821    4276 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 17:28:13.285009    4276 out.go:177] * Automatically selected the socket_vmnet network
	I0813 17:28:13.287399    4276 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0813 17:28:13.287414    4276 cni.go:84] Creating CNI manager for ""
	I0813 17:28:13.287422    4276 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0813 17:28:13.287450    4276 start.go:340] cluster config:
	{Name:kubernetes-upgrade-397000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-397000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:28:13.291100    4276 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:28:13.298323    4276 out.go:177] * Starting "kubernetes-upgrade-397000" primary control-plane node in "kubernetes-upgrade-397000" cluster
	I0813 17:28:13.302324    4276 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0813 17:28:13.302348    4276 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0813 17:28:13.302362    4276 cache.go:56] Caching tarball of preloaded images
	I0813 17:28:13.302440    4276 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:28:13.302446    4276 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0813 17:28:13.302504    4276 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/kubernetes-upgrade-397000/config.json ...
	I0813 17:28:13.302514    4276 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/kubernetes-upgrade-397000/config.json: {Name:mk7f8966a8c7e8e19bd1def89dce55ae60498d81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:28:13.302751    4276 start.go:360] acquireMachinesLock for kubernetes-upgrade-397000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:28:13.302786    4276 start.go:364] duration metric: took 25.834µs to acquireMachinesLock for "kubernetes-upgrade-397000"
	I0813 17:28:13.302798    4276 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-397000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-397000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:28:13.302829    4276 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:28:13.307309    4276 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 17:28:13.323770    4276 start.go:159] libmachine.API.Create for "kubernetes-upgrade-397000" (driver="qemu2")
	I0813 17:28:13.323799    4276 client.go:168] LocalClient.Create starting
	I0813 17:28:13.323874    4276 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:28:13.323911    4276 main.go:141] libmachine: Decoding PEM data...
	I0813 17:28:13.323919    4276 main.go:141] libmachine: Parsing certificate...
	I0813 17:28:13.323955    4276 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:28:13.323978    4276 main.go:141] libmachine: Decoding PEM data...
	I0813 17:28:13.323985    4276 main.go:141] libmachine: Parsing certificate...
	I0813 17:28:13.324355    4276 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:28:13.473840    4276 main.go:141] libmachine: Creating SSH key...
	I0813 17:28:13.558826    4276 main.go:141] libmachine: Creating Disk image...
	I0813 17:28:13.558835    4276 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:28:13.559056    4276 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubernetes-upgrade-397000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubernetes-upgrade-397000/disk.qcow2
	I0813 17:28:13.568665    4276 main.go:141] libmachine: STDOUT: 
	I0813 17:28:13.568686    4276 main.go:141] libmachine: STDERR: 
	I0813 17:28:13.568750    4276 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubernetes-upgrade-397000/disk.qcow2 +20000M
	I0813 17:28:13.576792    4276 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:28:13.576809    4276 main.go:141] libmachine: STDERR: 
	I0813 17:28:13.576827    4276 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubernetes-upgrade-397000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubernetes-upgrade-397000/disk.qcow2
	I0813 17:28:13.576831    4276 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:28:13.576844    4276 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:28:13.576877    4276 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubernetes-upgrade-397000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubernetes-upgrade-397000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubernetes-upgrade-397000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:63:2e:18:f2:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubernetes-upgrade-397000/disk.qcow2
	I0813 17:28:13.578520    4276 main.go:141] libmachine: STDOUT: 
	I0813 17:28:13.578536    4276 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:28:13.578556    4276 client.go:171] duration metric: took 254.755292ms to LocalClient.Create
	I0813 17:28:15.580735    4276 start.go:128] duration metric: took 2.277898834s to createHost
	I0813 17:28:15.580808    4276 start.go:83] releasing machines lock for "kubernetes-upgrade-397000", held for 2.278045375s
	W0813 17:28:15.580905    4276 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:28:15.598002    4276 out.go:177] * Deleting "kubernetes-upgrade-397000" in qemu2 ...
	W0813 17:28:15.624320    4276 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:28:15.624343    4276 start.go:729] Will try again in 5 seconds ...
	I0813 17:28:20.625351    4276 start.go:360] acquireMachinesLock for kubernetes-upgrade-397000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:28:20.625708    4276 start.go:364] duration metric: took 287.542µs to acquireMachinesLock for "kubernetes-upgrade-397000"
	I0813 17:28:20.625822    4276 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-397000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-397000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:28:20.626008    4276 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:28:20.633633    4276 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 17:28:20.675109    4276 start.go:159] libmachine.API.Create for "kubernetes-upgrade-397000" (driver="qemu2")
	I0813 17:28:20.675163    4276 client.go:168] LocalClient.Create starting
	I0813 17:28:20.675278    4276 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:28:20.675337    4276 main.go:141] libmachine: Decoding PEM data...
	I0813 17:28:20.675351    4276 main.go:141] libmachine: Parsing certificate...
	I0813 17:28:20.675406    4276 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:28:20.675454    4276 main.go:141] libmachine: Decoding PEM data...
	I0813 17:28:20.675465    4276 main.go:141] libmachine: Parsing certificate...
	I0813 17:28:20.675934    4276 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:28:20.833935    4276 main.go:141] libmachine: Creating SSH key...
	I0813 17:28:20.980615    4276 main.go:141] libmachine: Creating Disk image...
	I0813 17:28:20.980624    4276 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:28:20.980902    4276 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubernetes-upgrade-397000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubernetes-upgrade-397000/disk.qcow2
	I0813 17:28:20.991625    4276 main.go:141] libmachine: STDOUT: 
	I0813 17:28:20.991651    4276 main.go:141] libmachine: STDERR: 
	I0813 17:28:20.991728    4276 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubernetes-upgrade-397000/disk.qcow2 +20000M
	I0813 17:28:21.001126    4276 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:28:21.001152    4276 main.go:141] libmachine: STDERR: 
	I0813 17:28:21.001170    4276 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubernetes-upgrade-397000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubernetes-upgrade-397000/disk.qcow2
	I0813 17:28:21.001175    4276 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:28:21.001195    4276 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:28:21.001227    4276 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubernetes-upgrade-397000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubernetes-upgrade-397000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubernetes-upgrade-397000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:6f:fe:03:62:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubernetes-upgrade-397000/disk.qcow2
	I0813 17:28:21.003358    4276 main.go:141] libmachine: STDOUT: 
	I0813 17:28:21.003382    4276 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:28:21.003395    4276 client.go:171] duration metric: took 328.2305ms to LocalClient.Create
	I0813 17:28:23.005575    4276 start.go:128] duration metric: took 2.379554541s to createHost
	I0813 17:28:23.005645    4276 start.go:83] releasing machines lock for "kubernetes-upgrade-397000", held for 2.3799535s
	W0813 17:28:23.006148    4276 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-397000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-397000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:28:23.017773    4276 out.go:177] 
	W0813 17:28:23.020809    4276 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:28:23.020833    4276 out.go:239] * 
	* 
	W0813 17:28:23.023531    4276 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:28:23.032731    4276 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-397000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-397000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-397000: (3.259308084s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-397000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-397000 status --format={{.Host}}: exit status 7 (35.598792ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-397000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-397000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.171802375s)

-- stdout --
	* [kubernetes-upgrade-397000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-397000" primary control-plane node in "kubernetes-upgrade-397000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-397000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-397000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0813 17:28:26.374097    4318 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:28:26.374246    4318 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:28:26.374250    4318 out.go:304] Setting ErrFile to fd 2...
	I0813 17:28:26.374252    4318 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:28:26.374401    4318 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:28:26.375463    4318 out.go:298] Setting JSON to false
	I0813 17:28:26.391656    4318 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3470,"bootTime":1723591836,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:28:26.391727    4318 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:28:26.396391    4318 out.go:177] * [kubernetes-upgrade-397000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:28:26.399274    4318 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:28:26.399353    4318 notify.go:220] Checking for updates...
	I0813 17:28:26.406289    4318 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:28:26.410243    4318 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:28:26.413281    4318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:28:26.416320    4318 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:28:26.419224    4318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:28:26.422582    4318 config.go:182] Loaded profile config "kubernetes-upgrade-397000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0813 17:28:26.422862    4318 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:28:26.427270    4318 out.go:177] * Using the qemu2 driver based on existing profile
	I0813 17:28:26.434338    4318 start.go:297] selected driver: qemu2
	I0813 17:28:26.434346    4318 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-397000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-397000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:28:26.434423    4318 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:28:26.436787    4318 cni.go:84] Creating CNI manager for ""
	I0813 17:28:26.436807    4318 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0813 17:28:26.436834    4318 start.go:340] cluster config:
	{Name:kubernetes-upgrade-397000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-397000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:28:26.440325    4318 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:28:26.448272    4318 out.go:177] * Starting "kubernetes-upgrade-397000" primary control-plane node in "kubernetes-upgrade-397000" cluster
	I0813 17:28:26.452306    4318 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:28:26.452324    4318 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 17:28:26.452334    4318 cache.go:56] Caching tarball of preloaded images
	I0813 17:28:26.452419    4318 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:28:26.452426    4318 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0813 17:28:26.452499    4318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/kubernetes-upgrade-397000/config.json ...
	I0813 17:28:26.452922    4318 start.go:360] acquireMachinesLock for kubernetes-upgrade-397000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:28:26.452950    4318 start.go:364] duration metric: took 22.292µs to acquireMachinesLock for "kubernetes-upgrade-397000"
	I0813 17:28:26.452960    4318 start.go:96] Skipping create...Using existing machine configuration
	I0813 17:28:26.452966    4318 fix.go:54] fixHost starting: 
	I0813 17:28:26.453078    4318 fix.go:112] recreateIfNeeded on kubernetes-upgrade-397000: state=Stopped err=<nil>
	W0813 17:28:26.453086    4318 fix.go:138] unexpected machine state, will restart: <nil>
	I0813 17:28:26.460241    4318 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-397000" ...
	I0813 17:28:26.463272    4318 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:28:26.463315    4318 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubernetes-upgrade-397000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubernetes-upgrade-397000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubernetes-upgrade-397000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:6f:fe:03:62:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubernetes-upgrade-397000/disk.qcow2
	I0813 17:28:26.465297    4318 main.go:141] libmachine: STDOUT: 
	I0813 17:28:26.465323    4318 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:28:26.465347    4318 fix.go:56] duration metric: took 12.38125ms for fixHost
	I0813 17:28:26.465352    4318 start.go:83] releasing machines lock for "kubernetes-upgrade-397000", held for 12.39725ms
	W0813 17:28:26.465358    4318 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:28:26.465385    4318 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:28:26.465390    4318 start.go:729] Will try again in 5 seconds ...
	I0813 17:28:31.467413    4318 start.go:360] acquireMachinesLock for kubernetes-upgrade-397000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:28:31.467681    4318 start.go:364] duration metric: took 212.625µs to acquireMachinesLock for "kubernetes-upgrade-397000"
	I0813 17:28:31.467755    4318 start.go:96] Skipping create...Using existing machine configuration
	I0813 17:28:31.467764    4318 fix.go:54] fixHost starting: 
	I0813 17:28:31.468094    4318 fix.go:112] recreateIfNeeded on kubernetes-upgrade-397000: state=Stopped err=<nil>
	W0813 17:28:31.468106    4318 fix.go:138] unexpected machine state, will restart: <nil>
	I0813 17:28:31.475350    4318 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-397000" ...
	I0813 17:28:31.479368    4318 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:28:31.479498    4318 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubernetes-upgrade-397000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubernetes-upgrade-397000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubernetes-upgrade-397000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:6f:fe:03:62:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubernetes-upgrade-397000/disk.qcow2
	I0813 17:28:31.483780    4318 main.go:141] libmachine: STDOUT: 
	I0813 17:28:31.483817    4318 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:28:31.483852    4318 fix.go:56] duration metric: took 16.089ms for fixHost
	I0813 17:28:31.483862    4318 start.go:83] releasing machines lock for "kubernetes-upgrade-397000", held for 16.170292ms
	W0813 17:28:31.483948    4318 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-397000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-397000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:28:31.492322    4318 out.go:177] 
	W0813 17:28:31.495374    4318 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:28:31.495385    4318 out.go:239] * 
	* 
	W0813 17:28:31.496537    4318 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:28:31.506345    4318 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-397000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-397000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-397000 version --output=json: exit status 1 (50.483083ms)

** stderr ** 
	error: context "kubernetes-upgrade-397000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-08-13 17:28:31.567875 -0700 PDT m=+2551.006962042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-397000 -n kubernetes-upgrade-397000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-397000 -n kubernetes-upgrade-397000: exit status 7 (32.198167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-397000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-397000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-397000
--- FAIL: TestKubernetesUpgrade (18.50s)
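Every qemu2 start in this test dies on the same line: Failed to connect to "/var/run/socket_vmnet": Connection refused, raised when libmachine execs /opt/socket_vmnet/bin/socket_vmnet_client. The quickest independent check is to dial that unix socket directly; a hedged Go sketch (socket path taken from the log; this is a diagnostic illustration, not part of the suite):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// socket_vmnet_client obtains the VM's network file descriptor from
		// this unix socket; if the dial is refused, every qemu2 start will
		// fail exactly as the runs above do.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}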

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.5s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19429
- KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1008258301/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.50s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.03s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19429
- KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current717945813/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.03s)
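Both hyperkit subtests fail deterministically: hyperkit is an Intel-only hypervisor, and this agent is darwin/arm64, so minikube exits with DRV_UNSUPPORTED_OS (exit status 56) before any upgrade logic runs. A sketch of the kind of GOOS/GOARCH guard that produces such an error (illustrative only; not minikube's actual driver-registry code):

	package main

	import (
		"fmt"
		"runtime"
	)

	func main() {
		// hyperkit only runs on Intel Macs, so an arm64 host is rejected up
		// front -- which is why both subtests finish in about a second.
		if runtime.GOOS == "darwin" && runtime.GOARCH == "arm64" {
			fmt.Printf("Exiting due to DRV_UNSUPPORTED_OS: the driver 'hyperkit' is not supported on %s/%s\n", runtime.GOOS, runtime.GOARCH)
		}
	}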

TestStoppedBinaryUpgrade/Upgrade (571.95s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1986011609 start -p stopped-upgrade-967000 --memory=2200 --vm-driver=qemu2 
E0813 17:28:46.976230    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1986011609 start -p stopped-upgrade-967000 --memory=2200 --vm-driver=qemu2 : (38.118156834s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1986011609 -p stopped-upgrade-967000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1986011609 -p stopped-upgrade-967000 stop: (12.128265291s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-967000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0813 17:30:36.790749    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/functional-174000/client.crt: no such file or directory" logger="UnhandledError"
E0813 17:33:39.864781    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/functional-174000/client.crt: no such file or directory" logger="UnhandledError"
E0813 17:33:46.958263    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-967000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.60950175s)

-- stdout --
	* [stopped-upgrade-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-967000" primary control-plane node in "stopped-upgrade-967000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-967000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0813 17:29:23.004685    4376 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:29:23.004859    4376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:29:23.004864    4376 out.go:304] Setting ErrFile to fd 2...
	I0813 17:29:23.004867    4376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:29:23.005034    4376 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:29:23.006351    4376 out.go:298] Setting JSON to false
	I0813 17:29:23.026298    4376 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3527,"bootTime":1723591836,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:29:23.026370    4376 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:29:23.031189    4376 out.go:177] * [stopped-upgrade-967000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:29:23.038298    4376 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:29:23.038335    4376 notify.go:220] Checking for updates...
	I0813 17:29:23.044259    4376 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:29:23.047302    4376 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:29:23.050239    4376 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:29:23.053260    4376 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:29:23.056281    4376 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:29:23.057945    4376 config.go:182] Loaded profile config "stopped-upgrade-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0813 17:29:23.061245    4376 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0813 17:29:23.064297    4376 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:29:23.068135    4376 out.go:177] * Using the qemu2 driver based on existing profile
	I0813 17:29:23.075288    4376 start.go:297] selected driver: qemu2
	I0813 17:29:23.075296    4376 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50478 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0813 17:29:23.075355    4376 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:29:23.077785    4376 cni.go:84] Creating CNI manager for ""
	I0813 17:29:23.077803    4376 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0813 17:29:23.077832    4376 start.go:340] cluster config:
	{Name:stopped-upgrade-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50478 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0813 17:29:23.077890    4376 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:29:23.085240    4376 out.go:177] * Starting "stopped-upgrade-967000" primary control-plane node in "stopped-upgrade-967000" cluster
	I0813 17:29:23.089267    4376 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0813 17:29:23.089283    4376 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0813 17:29:23.089289    4376 cache.go:56] Caching tarball of preloaded images
	I0813 17:29:23.089348    4376 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:29:23.089354    4376 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0813 17:29:23.089407    4376 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/config.json ...
	I0813 17:29:23.089805    4376 start.go:360] acquireMachinesLock for stopped-upgrade-967000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:29:23.089841    4376 start.go:364] duration metric: took 30.083µs to acquireMachinesLock for "stopped-upgrade-967000"
	I0813 17:29:23.089852    4376 start.go:96] Skipping create...Using existing machine configuration
	I0813 17:29:23.089857    4376 fix.go:54] fixHost starting: 
	I0813 17:29:23.089976    4376 fix.go:112] recreateIfNeeded on stopped-upgrade-967000: state=Stopped err=<nil>
	W0813 17:29:23.089985    4376 fix.go:138] unexpected machine state, will restart: <nil>
	I0813 17:29:23.094245    4376 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-967000" ...
	I0813 17:29:23.102239    4376 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:29:23.102312    4376 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/stopped-upgrade-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/stopped-upgrade-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/stopped-upgrade-967000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50444-:22,hostfwd=tcp::50445-:2376,hostname=stopped-upgrade-967000 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/stopped-upgrade-967000/disk.qcow2
	I0813 17:29:23.146801    4376 main.go:141] libmachine: STDOUT: 
	I0813 17:29:23.146822    4376 main.go:141] libmachine: STDERR: 
	I0813 17:29:23.146828    4376 main.go:141] libmachine: Waiting for VM to start (ssh -p 50444 docker@127.0.0.1)...
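
The qemu-system-aarch64 invocation above is the substrate for everything that follows: every later step runs over the SSH forward it sets up. An annotated sketch of the flags that actually appear in the command (paths elided; the annotations are editorial, not from the log):

    # -M virt,highmem=off : ARM "virt" machine type; highmem is commonly disabled for hvf compatibility
    # -accel hvf          : macOS Hypervisor.framework acceleration ("Using hvf" above)
    # -m 2200 -smp 2      : mirrors Memory:2200 / CPUs:2 from the profile config
    # hostfwd=tcp::50444-:22 and hostfwd=tcp::50445-:2376
    #                     : user-mode NIC forwarding host port 50444 to guest SSH and
    #                       host 50445 to Docker TLS, which is why the SSH clients
    #                       below all dial localhost:50444
    qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2200 -smp 2 ...
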
	I0813 17:29:43.066718    4376 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/config.json ...
	I0813 17:29:43.067533    4376 machine.go:94] provisionDockerMachine start ...
	I0813 17:29:43.067711    4376 main.go:141] libmachine: Using SSH client type: native
	I0813 17:29:43.068193    4376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1047f05a0] 0x1047f2e00 <nil>  [] 0s} localhost 50444 <nil> <nil>}
	I0813 17:29:43.068206    4376 main.go:141] libmachine: About to run SSH command:
	hostname
	I0813 17:29:43.144083    4376 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0813 17:29:43.144109    4376 buildroot.go:166] provisioning hostname "stopped-upgrade-967000"
	I0813 17:29:43.144196    4376 main.go:141] libmachine: Using SSH client type: native
	I0813 17:29:43.144404    4376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1047f05a0] 0x1047f2e00 <nil>  [] 0s} localhost 50444 <nil> <nil>}
	I0813 17:29:43.144416    4376 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-967000 && echo "stopped-upgrade-967000" | sudo tee /etc/hostname
	I0813 17:29:43.214897    4376 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-967000
	
	I0813 17:29:43.214960    4376 main.go:141] libmachine: Using SSH client type: native
	I0813 17:29:43.215085    4376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1047f05a0] 0x1047f2e00 <nil>  [] 0s} localhost 50444 <nil> <nil>}
	I0813 17:29:43.215095    4376 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-967000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-967000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-967000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 17:29:43.276002    4376 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0813 17:29:43.276015    4376 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19429-1127/.minikube CaCertPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19429-1127/.minikube}
	I0813 17:29:43.276024    4376 buildroot.go:174] setting up certificates
	I0813 17:29:43.276028    4376 provision.go:84] configureAuth start
	I0813 17:29:43.276034    4376 provision.go:143] copyHostCerts
	I0813 17:29:43.276134    4376 exec_runner.go:144] found /Users/jenkins/minikube-integration/19429-1127/.minikube/ca.pem, removing ...
	I0813 17:29:43.276141    4376 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19429-1127/.minikube/ca.pem
	I0813 17:29:43.276269    4376 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19429-1127/.minikube/ca.pem (1082 bytes)
	I0813 17:29:43.276473    4376 exec_runner.go:144] found /Users/jenkins/minikube-integration/19429-1127/.minikube/cert.pem, removing ...
	I0813 17:29:43.276480    4376 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19429-1127/.minikube/cert.pem
	I0813 17:29:43.276533    4376 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19429-1127/.minikube/cert.pem (1123 bytes)
	I0813 17:29:43.276650    4376 exec_runner.go:144] found /Users/jenkins/minikube-integration/19429-1127/.minikube/key.pem, removing ...
	I0813 17:29:43.276653    4376 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19429-1127/.minikube/key.pem
	I0813 17:29:43.276701    4376 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19429-1127/.minikube/key.pem (1675 bytes)
	I0813 17:29:43.276783    4376 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-967000 san=[127.0.0.1 localhost minikube stopped-upgrade-967000]
	I0813 17:29:43.321743    4376 provision.go:177] copyRemoteCerts
	I0813 17:29:43.321771    4376 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 17:29:43.321778    4376 sshutil.go:53] new ssh client: &{IP:localhost Port:50444 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/stopped-upgrade-967000/id_rsa Username:docker}
	I0813 17:29:43.350949    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 17:29:43.357927    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0813 17:29:43.365107    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 17:29:43.371776    4376 provision.go:87] duration metric: took 95.737583ms to configureAuth
	I0813 17:29:43.371784    4376 buildroot.go:189] setting minikube options for container-runtime
	I0813 17:29:43.371878    4376 config.go:182] Loaded profile config "stopped-upgrade-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0813 17:29:43.371915    4376 main.go:141] libmachine: Using SSH client type: native
	I0813 17:29:43.372004    4376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1047f05a0] 0x1047f2e00 <nil>  [] 0s} localhost 50444 <nil> <nil>}
	I0813 17:29:43.372009    4376 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0813 17:29:43.430095    4376 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0813 17:29:43.430105    4376 buildroot.go:70] root file system type: tmpfs
	I0813 17:29:43.430180    4376 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0813 17:29:43.430224    4376 main.go:141] libmachine: Using SSH client type: native
	I0813 17:29:43.430355    4376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1047f05a0] 0x1047f2e00 <nil>  [] 0s} localhost 50444 <nil> <nil>}
	I0813 17:29:43.430391    4376 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0813 17:29:43.491524    4376 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0813 17:29:43.491572    4376 main.go:141] libmachine: Using SSH client type: native
	I0813 17:29:43.491681    4376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1047f05a0] 0x1047f2e00 <nil>  [] 0s} localhost 50444 <nil> <nil>}
	I0813 17:29:43.491689    4376 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0813 17:29:43.854219    4376 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0813 17:29:43.854232    4376 machine.go:97] duration metric: took 786.698792ms to provisionDockerMachine
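
The empty ExecStart= line in the unit written above is deliberate; as the unit's own comments explain, systemd allows only one ExecStart= for non-oneshot services, and assigning the empty string first clears whatever a base unit or earlier drop-in contributed. The pattern in isolation (a generic sketch with a hypothetical daemon, not taken from this log):

    [Service]
    # reset the inherited command list ...
    ExecStart=
    # ... then install the one command this machine should run
    ExecStart=/usr/local/bin/mydaemon --flag

The "diff: can't stat" message above is likewise expected on a fresh provision: no old /lib/systemd/system/docker.service exists yet, so the diff fails and the || branch installs and enables the new unit.
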
	I0813 17:29:43.854239    4376 start.go:293] postStartSetup for "stopped-upgrade-967000" (driver="qemu2")
	I0813 17:29:43.854246    4376 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 17:29:43.854313    4376 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 17:29:43.854323    4376 sshutil.go:53] new ssh client: &{IP:localhost Port:50444 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/stopped-upgrade-967000/id_rsa Username:docker}
	I0813 17:29:43.885571    4376 ssh_runner.go:195] Run: cat /etc/os-release
	I0813 17:29:43.886904    4376 info.go:137] Remote host: Buildroot 2021.02.12
	I0813 17:29:43.886912    4376 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19429-1127/.minikube/addons for local assets ...
	I0813 17:29:43.887011    4376 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19429-1127/.minikube/files for local assets ...
	I0813 17:29:43.887128    4376 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19429-1127/.minikube/files/etc/ssl/certs/16352.pem -> 16352.pem in /etc/ssl/certs
	I0813 17:29:43.887281    4376 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0813 17:29:43.890059    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/files/etc/ssl/certs/16352.pem --> /etc/ssl/certs/16352.pem (1708 bytes)
	I0813 17:29:43.897242    4376 start.go:296] duration metric: took 42.997833ms for postStartSetup
	I0813 17:29:43.897255    4376 fix.go:56] duration metric: took 20.807701042s for fixHost
	I0813 17:29:43.897291    4376 main.go:141] libmachine: Using SSH client type: native
	I0813 17:29:43.897402    4376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1047f05a0] 0x1047f2e00 <nil>  [] 0s} localhost 50444 <nil> <nil>}
	I0813 17:29:43.897410    4376 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0813 17:29:43.953506    4376 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723595384.036107046
	
	I0813 17:29:43.953514    4376 fix.go:216] guest clock: 1723595384.036107046
	I0813 17:29:43.953518    4376 fix.go:229] Guest: 2024-08-13 17:29:44.036107046 -0700 PDT Remote: 2024-08-13 17:29:43.897257 -0700 PDT m=+20.923896168 (delta=138.850046ms)
	I0813 17:29:43.953534    4376 fix.go:200] guest clock delta is within tolerance: 138.850046ms
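
The delta on these two lines is plain subtraction of the timestamps in the line above: guest 17:29:44.036107046 minus host 17:29:43.897257 = 0.138850046 s, i.e. the 138.850046ms reported, so the guest clock is left untouched.
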
	I0813 17:29:43.953539    4376 start.go:83] releasing machines lock for "stopped-upgrade-967000", held for 20.863995084s
	I0813 17:29:43.953605    4376 ssh_runner.go:195] Run: cat /version.json
	I0813 17:29:43.953616    4376 sshutil.go:53] new ssh client: &{IP:localhost Port:50444 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/stopped-upgrade-967000/id_rsa Username:docker}
	I0813 17:29:43.953636    4376 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0813 17:29:43.953655    4376 sshutil.go:53] new ssh client: &{IP:localhost Port:50444 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/stopped-upgrade-967000/id_rsa Username:docker}
	W0813 17:29:43.954195    4376 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50444: connect: connection refused
	I0813 17:29:43.954219    4376 retry.go:31] will retry after 320.897532ms: dial tcp [::1]:50444: connect: connection refused
	W0813 17:29:44.333113    4376 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0813 17:29:44.333284    4376 ssh_runner.go:195] Run: systemctl --version
	I0813 17:29:44.336765    4376 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0813 17:29:44.340263    4376 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0813 17:29:44.340315    4376 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0813 17:29:44.345428    4376 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0813 17:29:44.354282    4376 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
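
The find/sed pair above rewrites any pre-existing bridge or podman CNI config so its pod network matches the 10.244.0.0/16 subnet minikube expects; here it touched 87-podman-bridge.conflist. Schematically (the 10.88.0.0/16 starting value is an assumption based on podman's usual default and is not shown in the log):

    "subnet": "10.88.0.0/16"   -->   "subnet": "10.244.0.0/16"
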
	I0813 17:29:44.354298    4376 start.go:495] detecting cgroup driver to use...
	I0813 17:29:44.354426    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 17:29:44.366837    4376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0813 17:29:44.370798    4376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0813 17:29:44.375605    4376 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0813 17:29:44.375658    4376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0813 17:29:44.379710    4376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0813 17:29:44.383608    4376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0813 17:29:44.388638    4376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0813 17:29:44.391645    4376 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0813 17:29:44.394420    4376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0813 17:29:44.397763    4376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0813 17:29:44.401122    4376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0813 17:29:44.404026    4376 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 17:29:44.406425    4376 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 17:29:44.409372    4376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 17:29:44.490857    4376 ssh_runner.go:195] Run: sudo systemctl restart containerd
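
The sed one-liners above converge /etc/containerd/config.toml on the cgroupfs driver and the runc v2 shim, matching the CgroupDriver:cgroupfs handed to kubeadm later in this log. A sketch of the stanza the edits aim at (illustrative only; exact table layout varies across containerd releases, and the VM's real file is never printed here):

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.7"
      restrict_oom_score_adj = false
      enable_unprivileged_ports = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false
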
	I0813 17:29:44.501468    4376 start.go:495] detecting cgroup driver to use...
	I0813 17:29:44.501537    4376 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0813 17:29:44.507884    4376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0813 17:29:44.512216    4376 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0813 17:29:44.517684    4376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0813 17:29:44.522762    4376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0813 17:29:44.527499    4376 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0813 17:29:44.585665    4376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0813 17:29:44.591649    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 17:29:44.597300    4376 ssh_runner.go:195] Run: which cri-dockerd
	I0813 17:29:44.598620    4376 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0813 17:29:44.601570    4376 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0813 17:29:44.606746    4376 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0813 17:29:44.692971    4376 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0813 17:29:44.765502    4376 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0813 17:29:44.765553    4376 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0813 17:29:44.771060    4376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 17:29:44.841908    4376 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0813 17:29:45.964899    4376 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.122990209s)
	I0813 17:29:45.964966    4376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0813 17:29:45.969830    4376 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0813 17:29:45.976466    4376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0813 17:29:45.981035    4376 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0813 17:29:46.058013    4376 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0813 17:29:46.134622    4376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 17:29:46.197821    4376 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0813 17:29:46.203788    4376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0813 17:29:46.208155    4376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 17:29:46.287531    4376 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0813 17:29:46.324678    4376 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0813 17:29:46.324767    4376 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0813 17:29:46.326976    4376 start.go:563] Will wait 60s for crictl version
	I0813 17:29:46.327027    4376 ssh_runner.go:195] Run: which crictl
	I0813 17:29:46.328854    4376 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0813 17:29:46.343021    4376 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0813 17:29:46.343083    4376 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0813 17:29:46.359959    4376 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0813 17:29:46.384164    4376 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0813 17:29:46.384234    4376 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0813 17:29:46.385471    4376 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 17:29:46.389776    4376 kubeadm.go:883] updating cluster {Name:stopped-upgrade-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50478 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0813 17:29:46.389819    4376 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0813 17:29:46.389866    4376 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0813 17:29:46.399994    4376 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0813 17:29:46.400004    4376 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
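
These two lines are the crux of all the image shuffling that follows. The v1.26.0-era preload tarball ships the control-plane images under their old k8s.gcr.io names, while this minikube looks for them under registry.k8s.io (the Kubernetes image registry was renamed in 2023); the image content is identical, only the repository name differs. In principle a retag inside the VM would reconcile the two lists (illustrative; the test instead falls back to minikube's per-image host cache below):

    docker tag k8s.gcr.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-apiserver:v1.24.1
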
	I0813 17:29:46.400047    4376 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0813 17:29:46.403041    4376 ssh_runner.go:195] Run: which lz4
	I0813 17:29:46.404415    4376 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0813 17:29:46.405584    4376 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0813 17:29:46.405594    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0813 17:29:47.311636    4376 docker.go:649] duration metric: took 907.261792ms to copy over tarball
	I0813 17:29:47.311687    4376 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 17:29:48.475708    4376 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.164026208s)
	I0813 17:29:48.475720    4376 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0813 17:29:48.491116    4376 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0813 17:29:48.494155    4376 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0813 17:29:48.499541    4376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 17:29:48.577609    4376 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0813 17:29:50.165507    4376 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.587904042s)
	I0813 17:29:50.165611    4376 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0813 17:29:50.180203    4376 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0813 17:29:50.180211    4376 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0813 17:29:50.180217    4376 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0813 17:29:50.184497    4376 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 17:29:50.186486    4376 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0813 17:29:50.188436    4376 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0813 17:29:50.188559    4376 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 17:29:50.190312    4376 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0813 17:29:50.190353    4376 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0813 17:29:50.191796    4376 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0813 17:29:50.191800    4376 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0813 17:29:50.193097    4376 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0813 17:29:50.193211    4376 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0813 17:29:50.194343    4376 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0813 17:29:50.194354    4376 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0813 17:29:50.195725    4376 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0813 17:29:50.195859    4376 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0813 17:29:50.197098    4376 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0813 17:29:50.197608    4376 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0813 17:29:50.636414    4376 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0813 17:29:50.644153    4376 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0813 17:29:50.652131    4376 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0813 17:29:50.654596    4376 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0813 17:29:50.654607    4376 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0813 17:29:50.654621    4376 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0813 17:29:50.654643    4376 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0813 17:29:50.664044    4376 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0813 17:29:50.664070    4376 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0813 17:29:50.664125    4376 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0813 17:29:50.676840    4376 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0813 17:29:50.676866    4376 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0813 17:29:50.676917    4376 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0813 17:29:50.687317    4376 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0813 17:29:50.687337    4376 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0813 17:29:50.687351    4376 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0813 17:29:50.687360    4376 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0813 17:29:50.687394    4376 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0813 17:29:50.691028    4376 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0813 17:29:50.697058    4376 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0813 17:29:50.698636    4376 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0813 17:29:50.707231    4376 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0813 17:29:50.707250    4376 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0813 17:29:50.707297    4376 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	W0813 17:29:50.713366    4376 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0813 17:29:50.713474    4376 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0813 17:29:50.714629    4376 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0813 17:29:50.718117    4376 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0813 17:29:50.719625    4376 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0813 17:29:50.743995    4376 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0813 17:29:50.744014    4376 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0813 17:29:50.744037    4376 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0813 17:29:50.744055    4376 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0813 17:29:50.744050    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0813 17:29:50.743996    4376 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0813 17:29:50.744090    4376 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0813 17:29:50.744110    4376 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0813 17:29:50.762379    4376 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0813 17:29:50.762383    4376 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0813 17:29:50.762489    4376 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0813 17:29:50.764095    4376 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0813 17:29:50.764108    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0813 17:29:50.773841    4376 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0813 17:29:50.773853    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0813 17:29:50.810589    4376 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0813 17:29:50.810692    4376 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 17:29:50.828342    4376 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0813 17:29:50.828365    4376 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0813 17:29:50.828371    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0813 17:29:50.830107    4376 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0813 17:29:50.830125    4376 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 17:29:50.830180    4376 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 17:29:50.873695    4376 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0813 17:29:50.873718    4376 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0813 17:29:50.873828    4376 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0813 17:29:50.875203    4376 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0813 17:29:50.875214    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0813 17:29:50.901990    4376 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0813 17:29:50.902003    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0813 17:29:51.138199    4376 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0813 17:29:51.138236    4376 cache_images.go:92] duration metric: took 958.026583ms to LoadCachedImages
	W0813 17:29:51.138276    4376 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
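
This warning is where the registry mismatch finally bites: only pause 3.7, coredns v1.8.6, and storage-provisioner v5 existed in the host-side per-image cache (the "Transferred and loaded" lines above), and the stat on the kube-controller-manager cache file fails, so LoadCachedImages aborts for the remaining control-plane images and the cluster must come up without them. A host-side look at what the cache really held would be something like (illustrative command, not run by the test):

    ls /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/
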
	I0813 17:29:51.138283    4376 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0813 17:29:51.138336    4376 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-967000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
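
This kubelet fragment reuses the clear-then-set ExecStart pattern from the docker unit earlier and, per the scp a few lines below, is installed as the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in. On a live machine the merged result can be inspected with (illustrative command, not run by the test):

    sudo systemctl cat kubelet
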
	I0813 17:29:51.138402    4376 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0813 17:29:51.152134    4376 cni.go:84] Creating CNI manager for ""
	I0813 17:29:51.152148    4376 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0813 17:29:51.152152    4376 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0813 17:29:51.152160    4376 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-967000 NodeName:stopped-upgrade-967000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0813 17:29:51.152231    4376 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-967000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 17:29:51.152488    4376 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0813 17:29:51.155512    4376 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 17:29:51.155548    4376 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 17:29:51.157918    4376 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0813 17:29:51.162417    4376 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 17:29:51.167403    4376 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
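
The kubeadm documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what just landed in /var/tmp/minikube/kubeadm.yaml.new (the 2096-byte scp). When minikube later bootstraps the control plane it hands that file to the versioned kubeadm under /var/lib/minikube/binaries; conceptually the step is (simplified sketch; the real invocation adds preflight-ignore flags):

    sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml
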
	I0813 17:29:51.172239    4376 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0813 17:29:51.173405    4376 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 17:29:51.177284    4376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 17:29:51.252357    4376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0813 17:29:51.260039    4376 certs.go:68] Setting up /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000 for IP: 10.0.2.15
	I0813 17:29:51.260048    4376 certs.go:194] generating shared ca certs ...
	I0813 17:29:51.260056    4376 certs.go:226] acquiring lock for ca certs: {Name:mk1c25d4292e2fe754770039b132c434f4539a16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:29:51.260216    4376 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/ca.key
	I0813 17:29:51.260267    4376 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/proxy-client-ca.key
	I0813 17:29:51.260273    4376 certs.go:256] generating profile certs ...
	I0813 17:29:51.260365    4376 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/client.key
	I0813 17:29:51.260384    4376 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/apiserver.key.0ca3edb1
	I0813 17:29:51.260396    4376 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/apiserver.crt.0ca3edb1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0813 17:29:51.317086    4376 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/apiserver.crt.0ca3edb1 ...
	I0813 17:29:51.317112    4376 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/apiserver.crt.0ca3edb1: {Name:mk47dbff3f8e01159079760cbad8dab7726b13b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:29:51.317649    4376 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/apiserver.key.0ca3edb1 ...
	I0813 17:29:51.317655    4376 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/apiserver.key.0ca3edb1: {Name:mk20867504880706023e8d83a4e94a08ecbe57fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:29:51.317810    4376 certs.go:381] copying /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/apiserver.crt.0ca3edb1 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/apiserver.crt
	I0813 17:29:51.317945    4376 certs.go:385] copying /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/apiserver.key.0ca3edb1 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/apiserver.key
	I0813 17:29:51.318107    4376 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/proxy-client.key
	I0813 17:29:51.318232    4376 certs.go:484] found cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/1635.pem (1338 bytes)
	W0813 17:29:51.318263    4376 certs.go:480] ignoring /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/1635_empty.pem, impossibly tiny 0 bytes
	I0813 17:29:51.318269    4376 certs.go:484] found cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 17:29:51.318299    4376 certs.go:484] found cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem (1082 bytes)
	I0813 17:29:51.318323    4376 certs.go:484] found cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem (1123 bytes)
	I0813 17:29:51.318346    4376 certs.go:484] found cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/key.pem (1675 bytes)
	I0813 17:29:51.318396    4376 certs.go:484] found cert: /Users/jenkins/minikube-integration/19429-1127/.minikube/files/etc/ssl/certs/16352.pem (1708 bytes)
	I0813 17:29:51.318735    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 17:29:51.325855    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 17:29:51.332843    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 17:29:51.340542    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 17:29:51.347914    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0813 17:29:51.354704    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 17:29:51.361744    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 17:29:51.369143    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 17:29:51.376594    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 17:29:51.383353    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/1635.pem --> /usr/share/ca-certificates/1635.pem (1338 bytes)
	I0813 17:29:51.390024    4376 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19429-1127/.minikube/files/etc/ssl/certs/16352.pem --> /usr/share/ca-certificates/16352.pem (1708 bytes)
	I0813 17:29:51.397167    4376 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 17:29:51.402417    4376 ssh_runner.go:195] Run: openssl version
	I0813 17:29:51.404437    4376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1635.pem && ln -fs /usr/share/ca-certificates/1635.pem /etc/ssl/certs/1635.pem"
	I0813 17:29:51.407590    4376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1635.pem
	I0813 17:29:51.408976    4376 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:53 /usr/share/ca-certificates/1635.pem
	I0813 17:29:51.408997    4376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1635.pem
	I0813 17:29:51.410782    4376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1635.pem /etc/ssl/certs/51391683.0"
	I0813 17:29:51.413802    4376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16352.pem && ln -fs /usr/share/ca-certificates/16352.pem /etc/ssl/certs/16352.pem"
	I0813 17:29:51.416980    4376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16352.pem
	I0813 17:29:51.418328    4376 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:53 /usr/share/ca-certificates/16352.pem
	I0813 17:29:51.418345    4376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16352.pem
	I0813 17:29:51.420076    4376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16352.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 17:29:51.422797    4376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 17:29:51.425987    4376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 17:29:51.427373    4376 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:46 /usr/share/ca-certificates/minikubeCA.pem
	I0813 17:29:51.427395    4376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 17:29:51.428960    4376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
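
The `openssl x509 -hash` / `ln -fs` pairs above install each CA under /etc/ssl/certs/<subject-hash>.0, the filename scheme OpenSSL uses to locate trust anchors. A rough Go equivalent that shells out to openssl (assumes openssl on PATH and root access for /etc/ssl/certs; illustrative only, not minikube's implementation):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	pem := "/usr/share/ca-certificates/minikubeCA.pem" // example input from the log
    	// compute the OpenSSL subject hash, e.g. "b5213941"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // emulate the force flag of ln -fs
    	if err := os.Symlink(pem, link); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", link, "->", pem)
    }
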
	I0813 17:29:51.431924    4376 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0813 17:29:51.433336    4376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0813 17:29:51.435166    4376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0813 17:29:51.436880    4376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0813 17:29:51.438843    4376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0813 17:29:51.440682    4376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0813 17:29:51.442602    4376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
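
Each `openssl x509 -checkend 86400` probe above asks whether the certificate remains valid for at least another 24 hours (86400 seconds). The same check in pure Go with crypto/x509, a sketch with one path taken from the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the first certificate in the PEM file is
    // still valid d from now, the same question -checkend answers.
    func validFor(path string, d time.Duration) (bool, error) {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }
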
	I0813 17:29:51.444384    4376 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50478 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0813 17:29:51.444445    4376 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0813 17:29:51.455751    4376 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 17:29:51.458764    4376 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0813 17:29:51.458770    4376 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0813 17:29:51.458790    4376 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0813 17:29:51.462515    4376 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 17:29:51.462825    4376 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-967000" does not appear in /Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:29:51.462926    4376 kubeconfig.go:62] /Users/jenkins/minikube-integration/19429-1127/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-967000" cluster setting kubeconfig missing "stopped-upgrade-967000" context setting]
	I0813 17:29:51.463133    4376 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/kubeconfig: {Name:mk4f6a628d9f9f6550ed229faba2a879ed685a75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:29:51.463615    4376 kapi.go:59] client config for stopped-upgrade-967000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/client.key", CAFile:"/Users/jenkins/minikube-integration/19429-1127/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105da7e30), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
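
The rest.Config dump above shows the client is built directly from the profile's client cert/key and the shared minikube CA. Assembling an equivalent config with client-go looks roughly like this (a sketch with shortened paths; not the exact minikube code):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg := &rest.Config{
    		Host: "https://10.0.2.15:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			// per-profile client credentials plus the shared CA
    			CertFile: ".minikube/profiles/stopped-upgrade-967000/client.crt",
    			KeyFile:  ".minikube/profiles/stopped-upgrade-967000/client.key",
    			CAFile:   ".minikube/ca.crt",
    		},
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("client ready: %T\n", clientset)
    }
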
	I0813 17:29:51.463975    4376 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 17:29:51.466668    4376 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-967000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
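
The diff explains why a reconfigure is needed: the newer cri-dockerd socket address carries a unix:// scheme, and the kubelet cgroup driver changed from systemd to cgroupfs. Drift is detected with `sudo diff -u old new`, where diff exits 0 on identical files, 1 on differences, and >1 on real errors. A sketch of that exit-code probe in Go (assuming standard diff semantics):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // configDrifted mirrors the `diff -u old new` probe: status 1 means
    // the files differ (drift), anything above 1 is a genuine failure.
    func configDrifted(oldPath, newPath string) (bool, error) {
    	err := exec.Command("diff", "-u", oldPath, newPath).Run()
    	if err == nil {
    		return false, nil
    	}
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		return true, nil
    	}
    	return false, err
    }

    func main() {
    	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	fmt.Println(drifted, err)
    }
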
	I0813 17:29:51.466672    4376 kubeadm.go:1160] stopping kube-system containers ...
	I0813 17:29:51.466709    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0813 17:29:51.477357    4376 docker.go:483] Stopping containers: [39b1c47004b9 f104bd895320 a3733ebf7dbd 19258fc6df7f 7a9b4be4a825 288d1ff2b9f9 9f18fcade693 84ea75f51f17]
	I0813 17:29:51.477429    4376 ssh_runner.go:195] Run: docker stop 39b1c47004b9 f104bd895320 a3733ebf7dbd 19258fc6df7f 7a9b4be4a825 288d1ff2b9f9 9f18fcade693 84ea75f51f17
	I0813 17:29:51.488295    4376 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0813 17:29:51.493603    4376 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 17:29:51.496606    4376 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 17:29:51.496616    4376 kubeadm.go:157] found existing configuration files:
	
	I0813 17:29:51.496641    4376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/admin.conf
	I0813 17:29:51.499123    4376 kubeadm.go:163] "https://control-plane.minikube.internal:50478" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0813 17:29:51.499142    4376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0813 17:29:51.501842    4376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/kubelet.conf
	I0813 17:29:51.504782    4376 kubeadm.go:163] "https://control-plane.minikube.internal:50478" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0813 17:29:51.504802    4376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0813 17:29:51.507394    4376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/controller-manager.conf
	I0813 17:29:51.509958    4376 kubeadm.go:163] "https://control-plane.minikube.internal:50478" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0813 17:29:51.509978    4376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0813 17:29:51.512817    4376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/scheduler.conf
	I0813 17:29:51.515219    4376 kubeadm.go:163] "https://control-plane.minikube.internal:50478" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0813 17:29:51.515241    4376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0813 17:29:51.517949    4376 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
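
The grep/rm pairs above reduce to "keep each kubeconfig under /etc/kubernetes only if it already names the expected control-plane endpoint; otherwise remove it so kubeadm regenerates it". A hypothetical Go helper with the same logic:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // pruneStale removes conf files that do not reference the expected
    // endpoint, matching the grep-then-rm sequence in the log above.
    func pruneStale(endpoint string, paths ...string) {
    	for _, p := range paths {
    		raw, err := os.ReadFile(p)
    		if err != nil || !strings.Contains(string(raw), endpoint) {
    			os.Remove(p) // missing or stale: drop it, kubeadm will recreate it
    			continue
    		}
    		fmt.Println("keeping", p)
    	}
    }

    func main() {
    	pruneStale("https://control-plane.minikube.internal:50478",
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf")
    }
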
	I0813 17:29:51.520845    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 17:29:51.545038    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 17:29:52.187007    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 17:29:52.318199    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 17:29:52.341932    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
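
After the init phases, the test polls the apiserver's /healthz endpoint until an overall deadline; each probe below gives up after roughly five seconds ("Client.Timeout exceeded while awaiting headers"). A sketch of such a polling loop under stated assumptions (5s per-request timeout, TLS verification skipped because the probing host does not trust the cluster CA; this is an illustration, not minikube's code):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the ~5s failure cadence in the log
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(4 * time.Minute) // assumed overall budget
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // brief pause between probes
    	}
    	fmt.Println("gave up waiting for apiserver")
    }
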
	I0813 17:29:52.377033    4376 api_server.go:52] waiting for apiserver process to appear ...
	I0813 17:29:52.377106    4376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 17:29:52.879174    4376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 17:29:53.379144    4376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 17:29:53.383303    4376 api_server.go:72] duration metric: took 1.006286667s to wait for apiserver process to appear ...
	I0813 17:29:53.383313    4376 api_server.go:88] waiting for apiserver healthz status ...
	I0813 17:29:53.383321    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:29:58.385373    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:29:58.385417    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:03.385975    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:03.386022    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:08.386437    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:08.386476    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:13.387124    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:13.387190    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:18.388064    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:18.388143    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:23.389315    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:23.389362    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:28.390697    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:28.390750    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:33.392735    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:33.392803    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:38.395100    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:38.395121    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:43.396631    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:43.396650    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:48.397772    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:48.397844    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:30:53.400249    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:30:53.400359    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:30:53.412061    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:30:53.412139    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:30:53.422764    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:30:53.422842    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:30:53.434078    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:30:53.434160    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:30:53.444432    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:30:53.444514    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:30:53.455336    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:30:53.455415    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:30:53.465791    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:30:53.465865    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:30:53.476224    4376 logs.go:276] 0 containers: []
	W0813 17:30:53.476236    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:30:53.476302    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:30:53.487103    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:30:53.487122    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:30:53.487129    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:30:53.505243    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:30:53.505254    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:30:53.532410    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:30:53.532423    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:30:53.547220    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:30:53.547231    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:30:53.559619    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:30:53.559631    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:30:53.571088    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:30:53.571100    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:30:53.616224    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:30:53.616242    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:30:53.628212    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:30:53.628225    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:30:53.640223    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:30:53.640234    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:30:53.660282    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:30:53.660294    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:30:53.673126    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:30:53.673137    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:30:53.686018    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:30:53.686031    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:30:53.701676    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:30:53.701688    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:30:53.713389    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:30:53.713398    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:30:53.752901    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:30:53.752912    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:30:53.757383    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:30:53.757390    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:30:53.866007    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:30:53.866020    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
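
Each failed health check triggers a diagnostics pass: for every control-plane component the test lists matching containers with `docker ps -a --filter=name=k8s_<component>`, then tails the last 400 log lines of each. The whole pass reduces to a loop like this sketch (shelling out to docker; the component list is taken from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
    	for _, c := range components {
    		// find all containers (running or exited) for this component
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
    		if err != nil {
    			continue
    		}
    		for _, id := range strings.Fields(string(out)) {
    			// tail the last 400 lines, matching the log-gathering commands above
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("=== %s [%s] ===\n%s\n", c, id, logs)
    		}
    	}
    }
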
	I0813 17:30:56.382602    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:01.384443    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:01.384658    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:31:01.407776    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:31:01.407906    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:31:01.422844    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:31:01.422948    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:31:01.435679    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:31:01.435770    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:31:01.448097    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:31:01.448171    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:31:01.458989    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:31:01.459065    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:31:01.469509    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:31:01.469575    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:31:01.479543    4376 logs.go:276] 0 containers: []
	W0813 17:31:01.479558    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:31:01.479630    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:31:01.490394    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:31:01.490414    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:31:01.490420    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:31:01.527543    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:31:01.527554    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:31:01.568023    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:31:01.568034    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:31:01.587351    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:31:01.587361    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:31:01.598505    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:31:01.598519    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:31:01.610914    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:31:01.610925    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:31:01.614993    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:31:01.615000    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:31:01.627192    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:31:01.627206    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:31:01.639167    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:31:01.639187    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:31:01.656683    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:31:01.656694    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:31:01.669807    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:31:01.669819    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:31:01.682804    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:31:01.682817    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:31:01.697226    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:31:01.697235    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:31:01.711390    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:31:01.711401    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:31:01.722941    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:31:01.722951    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:31:01.735711    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:31:01.735722    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:31:01.761125    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:31:01.761135    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:31:04.302562    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:09.304848    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:09.305030    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:31:09.325625    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:31:09.325739    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:31:09.339578    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:31:09.339660    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:31:09.351043    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:31:09.351107    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:31:09.361840    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:31:09.361937    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:31:09.372228    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:31:09.372304    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:31:09.382530    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:31:09.382598    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:31:09.392050    4376 logs.go:276] 0 containers: []
	W0813 17:31:09.392061    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:31:09.392127    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:31:09.402563    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:31:09.402583    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:31:09.402590    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:31:09.414077    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:31:09.414089    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:31:09.425458    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:31:09.425469    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:31:09.465812    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:31:09.465821    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:31:09.482739    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:31:09.482749    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:31:09.496112    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:31:09.496123    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:31:09.514994    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:31:09.515004    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:31:09.528981    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:31:09.528993    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:31:09.541068    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:31:09.541079    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:31:09.552771    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:31:09.552781    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:31:09.565173    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:31:09.565184    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:31:09.604266    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:31:09.604278    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:31:09.615964    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:31:09.615977    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:31:09.641170    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:31:09.641183    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:31:09.645911    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:31:09.645918    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:31:09.681407    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:31:09.681419    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:31:09.692930    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:31:09.692942    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:31:12.207632    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:17.209863    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:17.210016    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:31:17.223584    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:31:17.223674    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:31:17.235265    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:31:17.235342    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:31:17.246195    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:31:17.246278    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:31:17.256891    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:31:17.256965    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:31:17.270417    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:31:17.270488    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:31:17.280712    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:31:17.280796    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:31:17.290557    4376 logs.go:276] 0 containers: []
	W0813 17:31:17.290570    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:31:17.290647    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:31:17.301377    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:31:17.301396    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:31:17.301402    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:31:17.339950    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:31:17.339959    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:31:17.374521    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:31:17.374532    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:31:17.389308    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:31:17.389318    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:31:17.401501    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:31:17.401512    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:31:17.413910    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:31:17.413920    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:31:17.425510    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:31:17.425521    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:31:17.437279    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:31:17.437289    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:31:17.449715    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:31:17.449724    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:31:17.467286    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:31:17.467296    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:31:17.481047    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:31:17.481058    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:31:17.498277    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:31:17.498287    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:31:17.509310    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:31:17.509321    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:31:17.521375    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:31:17.521385    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:31:17.525922    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:31:17.525929    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:31:17.566220    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:31:17.566235    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:31:17.578127    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:31:17.578139    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:31:20.106635    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:25.107288    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:25.107483    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:31:25.127538    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:31:25.127637    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:31:25.142319    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:31:25.142392    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:31:25.156066    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:31:25.156143    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:31:25.169757    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:31:25.169828    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:31:25.180829    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:31:25.180908    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:31:25.191198    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:31:25.191265    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:31:25.201548    4376 logs.go:276] 0 containers: []
	W0813 17:31:25.201559    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:31:25.201627    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:31:25.212304    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:31:25.212323    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:31:25.212328    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:31:25.226170    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:31:25.226180    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:31:25.237561    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:31:25.237573    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:31:25.255151    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:31:25.255162    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:31:25.267460    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:31:25.267471    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:31:25.292784    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:31:25.292793    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:31:25.305054    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:31:25.305065    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:31:25.318938    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:31:25.318949    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:31:25.330912    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:31:25.330922    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:31:25.346095    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:31:25.346105    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:31:25.384120    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:31:25.384130    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:31:25.398403    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:31:25.398414    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:31:25.435156    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:31:25.435166    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:31:25.449925    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:31:25.449937    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:31:25.462763    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:31:25.462773    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:31:25.499991    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:31:25.499999    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:31:25.503863    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:31:25.503870    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:31:28.019638    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:33.021829    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:33.021930    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:31:33.036737    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:31:33.036819    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:31:33.047632    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:31:33.047712    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:31:33.058312    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:31:33.058377    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:31:33.068939    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:31:33.069015    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:31:33.079369    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:31:33.079450    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:31:33.090247    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:31:33.090323    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:31:33.105180    4376 logs.go:276] 0 containers: []
	W0813 17:31:33.105192    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:31:33.105258    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:31:33.115902    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:31:33.115923    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:31:33.115929    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:31:33.153430    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:31:33.153441    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:31:33.164657    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:31:33.164669    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:31:33.176079    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:31:33.176091    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:31:33.201373    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:31:33.201382    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:31:33.206054    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:31:33.206061    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:31:33.220140    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:31:33.220151    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:31:33.236435    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:31:33.236446    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:31:33.275283    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:31:33.275295    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:31:33.289779    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:31:33.289789    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:31:33.301143    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:31:33.301156    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:31:33.313923    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:31:33.313934    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:31:33.325592    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:31:33.325601    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:31:33.339397    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:31:33.339409    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:31:33.377546    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:31:33.377557    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:31:33.389092    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:31:33.389104    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:31:33.401177    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:31:33.401188    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:31:35.920451    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:40.922723    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:40.922919    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:31:40.942850    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:31:40.942947    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:31:40.956495    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:31:40.956579    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:31:40.967651    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:31:40.967727    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:31:40.978838    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:31:40.978918    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:31:40.993724    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:31:40.993801    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:31:41.006774    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:31:41.006851    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:31:41.017164    4376 logs.go:276] 0 containers: []
	W0813 17:31:41.017174    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:31:41.017230    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:31:41.027209    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:31:41.027228    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:31:41.027234    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:31:41.039277    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:31:41.039291    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:31:41.043831    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:31:41.043838    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:31:41.058436    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:31:41.058447    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:31:41.071167    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:31:41.071178    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:31:41.088157    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:31:41.088167    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:31:41.099623    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:31:41.099635    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:31:41.120713    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:31:41.120725    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:31:41.141842    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:31:41.141852    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:31:41.165158    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:31:41.165166    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:31:41.176776    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:31:41.176788    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:31:41.190710    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:31:41.190722    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:31:41.202691    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:31:41.202703    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:31:41.214255    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:31:41.214268    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:31:41.252370    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:31:41.252379    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:31:41.292857    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:31:41.292868    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:31:41.307156    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:31:41.307166    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:31:43.847777    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:48.849946    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:48.850202    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:31:48.875035    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:31:48.875167    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:31:48.892008    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:31:48.892110    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:31:48.905112    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:31:48.905195    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:31:48.917231    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:31:48.917314    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:31:48.927487    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:31:48.927567    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:31:48.938904    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:31:48.938980    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:31:48.949724    4376 logs.go:276] 0 containers: []
	W0813 17:31:48.949737    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:31:48.949800    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:31:48.960832    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:31:48.960849    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:31:48.960855    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:31:48.972666    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:31:48.972679    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:31:48.987410    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:31:48.987421    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:31:49.030081    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:31:49.030096    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:31:49.048175    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:31:49.048188    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:31:49.059956    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:31:49.059968    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:31:49.071394    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:31:49.071404    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:31:49.095069    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:31:49.095080    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:31:49.099555    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:31:49.099563    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:31:49.135330    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:31:49.135341    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:31:49.150665    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:31:49.150674    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:31:49.163081    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:31:49.163096    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:31:49.174790    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:31:49.174801    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:31:49.193291    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:31:49.193303    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:31:49.205410    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:31:49.205420    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:31:49.218009    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:31:49.218019    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:31:49.237067    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:31:49.237078    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:31:51.777832    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:31:56.780019    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:31:56.780137    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:31:56.793295    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:31:56.793389    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:31:56.804291    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:31:56.804368    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:31:56.819052    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:31:56.819130    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:31:56.830134    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:31:56.830215    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:31:56.841533    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:31:56.841603    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:31:56.852843    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:31:56.852931    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:31:56.863791    4376 logs.go:276] 0 containers: []
	W0813 17:31:56.863803    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:31:56.863864    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:31:56.874352    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:31:56.874370    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:31:56.874376    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:31:56.878605    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:31:56.878611    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:31:56.916818    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:31:56.916831    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:31:56.935387    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:31:56.935398    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:31:56.948198    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:31:56.948210    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:31:56.960162    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:31:56.960174    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:31:57.000330    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:31:57.000343    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:31:57.014973    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:31:57.014985    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:31:57.029419    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:31:57.029432    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:31:57.053572    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:31:57.053587    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:31:57.091919    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:31:57.091931    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:31:57.106397    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:31:57.106408    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:31:57.117427    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:31:57.117439    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:31:57.129564    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:31:57.129574    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:31:57.141243    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:31:57.141254    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:31:57.153238    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:31:57.153249    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:31:57.165322    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:31:57.165333    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:31:59.679512    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:04.681752    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:04.682078    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:04.719205    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:32:04.719357    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:04.736625    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:32:04.736725    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:04.749612    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:32:04.749702    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:04.761013    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:32:04.761095    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:04.771113    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:32:04.771189    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:04.781266    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:32:04.781350    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:04.791372    4376 logs.go:276] 0 containers: []
	W0813 17:32:04.791387    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:04.791454    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:04.802313    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:32:04.802333    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:32:04.802338    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:32:04.816784    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:32:04.816795    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:32:04.830677    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:04.830687    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:04.855542    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:32:04.855549    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:32:04.874507    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:32:04.874518    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:32:04.886699    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:32:04.886710    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:32:04.898913    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:04.898926    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:04.903115    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:32:04.903123    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:32:04.940511    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:32:04.940525    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:32:04.962943    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:32:04.962953    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:32:04.974187    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:32:04.974197    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:04.986507    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:04.986519    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:05.023617    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:05.023625    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:05.063044    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:32:05.063056    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:32:05.075236    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:32:05.075247    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:32:05.086956    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:32:05.086967    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:32:05.103724    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:32:05.103737    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:32:07.617430    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:12.619554    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:12.619718    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:12.641171    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:32:12.641263    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:12.655208    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:32:12.655295    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:12.666113    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:32:12.666188    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:12.680792    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:32:12.680869    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:12.693298    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:32:12.693375    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:12.705006    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:32:12.705085    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:12.715438    4376 logs.go:276] 0 containers: []
	W0813 17:32:12.715452    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:12.715516    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:12.725994    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:32:12.726015    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:32:12.726021    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:32:12.742604    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:32:12.742615    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:32:12.754925    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:32:12.754935    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:32:12.767224    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:12.767235    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:12.792156    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:32:12.792167    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:32:12.828389    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:32:12.828399    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:32:12.839429    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:32:12.839440    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:32:12.850961    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:32:12.850972    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:12.863043    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:12.863053    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:12.901484    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:12.901495    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:12.937225    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:32:12.937236    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:32:12.951354    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:32:12.951364    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:32:12.963534    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:32:12.963546    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:32:12.984387    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:32:12.984400    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:32:12.998118    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:12.998127    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:13.002453    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:32:13.002460    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:32:13.016240    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:32:13.016251    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:32:15.532273    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:20.534458    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:20.534624    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:20.554003    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:32:20.554117    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:20.568573    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:32:20.568660    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:20.582351    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:32:20.582427    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:20.593442    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:32:20.593522    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:20.603961    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:32:20.604033    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:20.614460    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:32:20.614541    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:20.625384    4376 logs.go:276] 0 containers: []
	W0813 17:32:20.625399    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:20.625460    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:20.635732    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:32:20.635751    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:32:20.635757    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:32:20.653187    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:20.653198    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:20.688535    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:32:20.688548    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:32:20.703856    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:32:20.703868    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:32:20.715613    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:32:20.715626    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:32:20.728149    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:32:20.728158    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:32:20.742973    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:32:20.742983    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:32:20.761655    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:32:20.761665    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:32:20.773005    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:20.773015    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:20.797446    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:20.797455    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:20.835994    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:32:20.836004    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:32:20.874444    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:32:20.874458    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:32:20.890794    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:32:20.890805    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:32:20.906118    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:20.906134    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:20.910977    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:32:20.910984    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:32:20.925302    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:32:20.925313    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:32:20.937291    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:32:20.937302    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:23.451059    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:28.452317    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:28.452567    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:28.477950    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:32:28.478046    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:28.490850    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:32:28.490932    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:28.502404    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:32:28.502482    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:28.512963    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:32:28.513044    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:28.526317    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:32:28.526391    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:28.537366    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:32:28.537440    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:28.547511    4376 logs.go:276] 0 containers: []
	W0813 17:32:28.547528    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:28.547596    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:28.558319    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:32:28.558337    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:32:28.558344    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:32:28.569135    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:28.569149    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:28.573516    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:32:28.573522    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:32:28.611154    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:32:28.611165    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:32:28.624006    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:32:28.624016    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:32:28.639171    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:32:28.639181    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:32:28.657112    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:28.657122    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:28.681735    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:28.681747    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:28.718215    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:32:28.718227    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:32:28.732943    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:32:28.732955    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:32:28.752989    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:32:28.753000    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:32:28.765160    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:32:28.765173    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:32:28.778846    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:32:28.778858    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:28.790796    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:28.790807    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:28.830105    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:32:28.830121    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:32:28.844959    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:32:28.844970    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:32:28.857000    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:32:28.857010    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:32:31.368582    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:36.368444    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:36.368719    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:36.399095    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:32:36.399241    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:36.425080    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:32:36.425182    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:36.445332    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:32:36.445412    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:36.459100    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:32:36.459176    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:36.470299    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:32:36.470373    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:36.481185    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:32:36.481264    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:36.500146    4376 logs.go:276] 0 containers: []
	W0813 17:32:36.500159    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:36.500220    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:36.510497    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:32:36.510513    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:32:36.510518    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:32:36.524849    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:32:36.524860    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:32:36.535760    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:32:36.535772    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:32:36.548495    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:32:36.548504    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:36.562527    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:36.562539    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:36.601760    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:36.601770    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:36.636820    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:32:36.636830    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:32:36.651211    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:32:36.651223    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:32:36.665799    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:32:36.665814    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:32:36.684289    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:32:36.684301    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:32:36.695585    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:36.695595    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:36.705053    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:32:36.705060    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:32:36.749790    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:36.749801    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:36.773907    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:32:36.773916    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:32:36.788554    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:32:36.788566    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:32:36.802565    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:32:36.802577    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:32:36.814656    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:32:36.814667    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:32:39.328000    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:44.328672    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:44.328768    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:44.340328    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:32:44.340406    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:44.350742    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:32:44.350818    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:44.361555    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:32:44.361631    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:44.371782    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:32:44.371860    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:44.382428    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:32:44.382506    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:44.392988    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:32:44.393060    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:44.403196    4376 logs.go:276] 0 containers: []
	W0813 17:32:44.403207    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:44.403266    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:44.414033    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:32:44.414052    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:32:44.414067    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:32:44.426025    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:44.426036    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:44.449273    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:32:44.449284    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:32:44.465588    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:32:44.465597    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:32:44.478180    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:32:44.478190    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:32:44.496107    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:32:44.496117    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:32:44.514334    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:32:44.514346    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:32:44.527946    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:32:44.527957    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:32:44.540024    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:32:44.540036    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:32:44.551932    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:44.551943    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:44.591422    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:44.591430    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:44.595573    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:44.595580    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:44.636505    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:32:44.636515    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:32:44.674081    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:32:44.674092    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:32:44.685807    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:32:44.685818    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:32:44.698135    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:32:44.698146    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:32:44.709959    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:32:44.709968    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:47.223480    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:32:52.224996    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:32:52.225346    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:32:52.268461    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:32:52.268612    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:32:52.288701    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:32:52.288814    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:32:52.303342    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:32:52.303430    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:32:52.316177    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:32:52.316247    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:32:52.328392    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:32:52.328461    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:32:52.339243    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:32:52.339313    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:32:52.349928    4376 logs.go:276] 0 containers: []
	W0813 17:32:52.349940    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:32:52.350009    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:32:52.360824    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:32:52.360845    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:32:52.360853    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:32:52.401454    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:32:52.401469    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:32:52.438931    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:32:52.438944    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:32:52.457226    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:32:52.457237    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:32:52.469839    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:32:52.469850    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:32:52.482481    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:32:52.482491    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:32:52.494408    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:32:52.494420    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:32:52.507622    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:32:52.507632    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:32:52.519112    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:32:52.519123    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:32:52.534035    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:32:52.534045    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:32:52.549593    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:32:52.549606    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:32:52.567085    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:32:52.567099    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:32:52.595321    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:32:52.595333    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:32:52.617961    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:32:52.617972    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:32:52.622749    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:32:52.622756    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:32:52.661406    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:32:52.661417    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:32:52.679924    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:32:52.679939    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:32:55.194091    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:00.195947    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:00.196294    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:00.231942    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:33:00.232097    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:00.251425    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:33:00.251550    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:00.268802    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:33:00.268888    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:00.281595    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:33:00.281673    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:00.292239    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:33:00.292307    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:00.302891    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:33:00.302963    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:00.313130    4376 logs.go:276] 0 containers: []
	W0813 17:33:00.313141    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:00.313206    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:00.324285    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:33:00.324305    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:00.324311    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:00.362020    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:00.362028    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:00.398810    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:33:00.398821    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:33:00.437990    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:33:00.438005    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:33:00.453439    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:33:00.453449    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:33:00.469221    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:33:00.469231    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:33:00.481746    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:00.481757    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:00.486124    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:33:00.486130    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:33:00.505698    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:33:00.505709    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:33:00.525914    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:33:00.525924    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:00.537743    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:33:00.537755    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:33:00.552669    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:33:00.552680    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:33:00.564407    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:33:00.564419    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:33:00.580134    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:33:00.580145    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:33:00.591653    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:33:00.591665    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:33:00.603773    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:33:00.603785    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:33:00.615027    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:00.615038    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
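Each diagnostic cycle above has the same shape: enumerate the per-component containers with a docker ps name filter, then tail each container's log. A sketch of that loop with os/exec, reusing the k8s_ name prefix and the 400-line tail from the commands in this log (error handling reduced to prints):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Component names match the k8s_<name> filters used in the log above.
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            // docker ps -a --filter=name=k8s_<c> --format={{.ID}}
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Println(c, "list failed:", err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
            for _, id := range ids {
                // docker logs --tail 400 <id>, as run for each container above.
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
            }
        }
    }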
	I0813 17:33:03.138823    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:08.140788    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:08.141009    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:08.163265    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:33:08.163374    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:08.176921    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:33:08.177014    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:08.190070    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:33:08.190149    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:08.200785    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:33:08.200862    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:08.211528    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:33:08.211603    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:08.221876    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:33:08.221950    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:08.233036    4376 logs.go:276] 0 containers: []
	W0813 17:33:08.233048    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:08.233107    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:08.243707    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:33:08.243727    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:33:08.243734    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:33:08.262458    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:33:08.262470    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:33:08.274685    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:33:08.274695    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:33:08.299553    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:33:08.299563    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:08.311615    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:08.311626    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:08.316042    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:33:08.316048    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:33:08.357350    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:33:08.357361    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:33:08.371914    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:33:08.371926    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:33:08.383406    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:33:08.383417    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:33:08.394412    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:08.394423    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:08.418273    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:33:08.418281    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:33:08.432456    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:33:08.432467    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:33:08.444567    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:33:08.444580    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:33:08.462385    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:33:08.462404    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:33:08.475711    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:33:08.475722    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:33:08.488159    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:08.488170    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:08.527960    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:08.527971    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:11.064633    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:16.065063    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:16.065258    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:16.081328    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:33:16.081422    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:16.099422    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:33:16.099499    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:16.109914    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:33:16.109997    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:16.120543    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:33:16.120622    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:16.131644    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:33:16.131713    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:16.146390    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:33:16.146455    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:16.156116    4376 logs.go:276] 0 containers: []
	W0813 17:33:16.156131    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:16.156204    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:16.166637    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:33:16.166658    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:33:16.166663    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:33:16.181866    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:33:16.181876    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:33:16.196127    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:33:16.196139    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:16.208334    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:16.208346    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:16.249847    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:33:16.249858    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:33:16.263029    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:33:16.263040    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:33:16.275073    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:33:16.275087    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:33:16.286937    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:33:16.286947    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:33:16.304274    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:33:16.304284    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:33:16.316895    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:16.316905    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:16.339454    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:16.339467    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:16.343994    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:33:16.344000    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:33:16.355510    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:16.355519    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:16.393847    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:33:16.393858    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:33:16.433442    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:33:16.433452    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:33:16.451774    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:33:16.451786    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:33:16.463936    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:33:16.463949    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:33:18.977811    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:23.979940    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:23.980119    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:23.998377    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:33:23.998485    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:24.012045    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:33:24.012131    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:24.023799    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:33:24.023881    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:24.033990    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:33:24.034067    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:24.044466    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:33:24.044569    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:24.055718    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:33:24.055801    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:24.070013    4376 logs.go:276] 0 containers: []
	W0813 17:33:24.070024    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:24.070091    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:24.080556    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:33:24.080575    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:24.080580    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:24.118287    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:24.118296    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:24.122167    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:33:24.122174    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:33:24.134086    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:33:24.134097    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:33:24.151543    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:24.151554    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:24.186961    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:33:24.186973    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:33:24.200858    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:33:24.200868    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:33:24.214936    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:33:24.214948    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:33:24.226708    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:33:24.226719    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:33:24.237737    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:33:24.237749    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:24.259097    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:33:24.259107    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:33:24.298168    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:33:24.298182    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:33:24.317780    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:33:24.317790    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:33:24.330776    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:33:24.330789    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:33:24.342848    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:33:24.342859    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:33:24.354543    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:33:24.354555    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:33:24.367148    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:24.367158    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:26.891745    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:31.893885    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:31.894035    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:31.905941    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:33:31.906032    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:31.916596    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:33:31.916675    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:31.927301    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:33:31.927378    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:31.937782    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:33:31.937859    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:31.948780    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:33:31.948860    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:31.959694    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:33:31.959758    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:31.970452    4376 logs.go:276] 0 containers: []
	W0813 17:33:31.970464    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:31.970527    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:31.981676    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:33:31.981692    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:31.981699    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:31.986236    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:33:31.986242    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:33:32.022836    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:33:32.022846    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:33:32.035093    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:33:32.035102    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:33:32.053353    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:33:32.053362    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:33:32.065015    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:32.065026    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:32.103954    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:32.103964    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:32.142103    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:33:32.142115    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:33:32.155982    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:33:32.155991    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:33:32.167484    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:32.167494    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:32.191092    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:33:32.191101    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:32.203087    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:33:32.203100    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:33:32.216980    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:33:32.216991    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:33:32.231867    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:33:32.231878    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:33:32.243958    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:33:32.243970    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:33:32.258927    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:33:32.258940    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:33:32.270461    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:33:32.270472    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:33:34.784271    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:39.786366    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:39.786499    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:39.810495    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:33:39.810615    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:39.837978    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:33:39.838052    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:39.864872    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:33:39.864943    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:39.875824    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:33:39.875900    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:39.889882    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:33:39.889961    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:39.901239    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:33:39.901325    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:39.911881    4376 logs.go:276] 0 containers: []
	W0813 17:33:39.911893    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:39.911959    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:39.922576    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:33:39.922594    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:39.922601    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:39.960406    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:33:39.960417    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:33:39.974610    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:33:39.974621    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:33:39.985716    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:33:39.985730    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:33:39.997348    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:33:39.997359    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:33:40.009434    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:33:40.009444    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:33:40.021449    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:33:40.021460    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:33:40.032828    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:40.032839    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:40.070048    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:33:40.070060    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:33:40.090240    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:33:40.090251    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:33:40.102066    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:33:40.102078    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:40.113953    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:40.113965    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:40.118034    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:40.118041    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:40.139064    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:33:40.139070    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:33:40.176235    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:33:40.176246    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:33:40.191273    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:33:40.191284    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:33:40.208510    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:33:40.208520    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:33:42.723128    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:47.725228    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:47.725425    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:33:47.743357    4376 logs.go:276] 2 containers: [ab7b539f1ed1 7a9b4be4a825]
	I0813 17:33:47.743443    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:33:47.754877    4376 logs.go:276] 2 containers: [62b4ffe059f4 a3733ebf7dbd]
	I0813 17:33:47.754949    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:33:47.765168    4376 logs.go:276] 1 containers: [d3eb33aa3701]
	I0813 17:33:47.765241    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:33:47.777019    4376 logs.go:276] 2 containers: [1ca1bc5ca77c 39b1c47004b9]
	I0813 17:33:47.777104    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:33:47.791248    4376 logs.go:276] 1 containers: [59819b8ea9b2]
	I0813 17:33:47.791322    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:33:47.801705    4376 logs.go:276] 2 containers: [6875b9249f02 19258fc6df7f]
	I0813 17:33:47.801781    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:33:47.812073    4376 logs.go:276] 0 containers: []
	W0813 17:33:47.812085    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:33:47.812159    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:33:47.825887    4376 logs.go:276] 2 containers: [ece34a15531c 95599c5c8eff]
	I0813 17:33:47.825906    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:33:47.825912    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:33:47.829946    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:33:47.829953    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:33:47.864459    4376 logs.go:123] Gathering logs for kube-apiserver [7a9b4be4a825] ...
	I0813 17:33:47.864469    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a9b4be4a825"
	I0813 17:33:47.903034    4376 logs.go:123] Gathering logs for kube-scheduler [1ca1bc5ca77c] ...
	I0813 17:33:47.903045    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca1bc5ca77c"
	I0813 17:33:47.914677    4376 logs.go:123] Gathering logs for kube-controller-manager [6875b9249f02] ...
	I0813 17:33:47.914688    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6875b9249f02"
	I0813 17:33:47.931925    4376 logs.go:123] Gathering logs for kube-controller-manager [19258fc6df7f] ...
	I0813 17:33:47.931936    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19258fc6df7f"
	I0813 17:33:47.944448    4376 logs.go:123] Gathering logs for kube-apiserver [ab7b539f1ed1] ...
	I0813 17:33:47.944458    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab7b539f1ed1"
	I0813 17:33:47.958946    4376 logs.go:123] Gathering logs for storage-provisioner [95599c5c8eff] ...
	I0813 17:33:47.958956    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95599c5c8eff"
	I0813 17:33:47.969947    4376 logs.go:123] Gathering logs for etcd [a3733ebf7dbd] ...
	I0813 17:33:47.969958    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3733ebf7dbd"
	I0813 17:33:47.985213    4376 logs.go:123] Gathering logs for coredns [d3eb33aa3701] ...
	I0813 17:33:47.985224    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3eb33aa3701"
	I0813 17:33:48.000803    4376 logs.go:123] Gathering logs for kube-scheduler [39b1c47004b9] ...
	I0813 17:33:48.000817    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b1c47004b9"
	I0813 17:33:48.012883    4376 logs.go:123] Gathering logs for storage-provisioner [ece34a15531c] ...
	I0813 17:33:48.012894    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ece34a15531c"
	I0813 17:33:48.024340    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:33:48.024351    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:33:48.036367    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:33:48.036378    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:33:48.074018    4376 logs.go:123] Gathering logs for etcd [62b4ffe059f4] ...
	I0813 17:33:48.074028    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62b4ffe059f4"
	I0813 17:33:48.088588    4376 logs.go:123] Gathering logs for kube-proxy [59819b8ea9b2] ...
	I0813 17:33:48.088600    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59819b8ea9b2"
	I0813 17:33:48.100310    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:33:48.100321    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:33:50.625747    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:33:55.628033    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:33:55.628108    4376 kubeadm.go:597] duration metric: took 4m4.184250458s to restartPrimaryControlPlane
	W0813 17:33:55.628184    4376 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0813 17:33:55.628220    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0813 17:33:56.677315    4376 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.049102042s)
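The four-minute restart attempt ends in a deadline-driven fallback: keep probing until a time budget is exhausted, then reset the control plane. A compressed sketch of that decision, with the budget and probe interval taken from the timings in this log; the probe helper is illustrative and here always fails, as the real probe did for the whole 4m4s window:

    package main

    import (
        "fmt"
        "time"
    )

    // probe stands in for the healthz check above; in this run it never succeeded.
    func probe() error { return fmt.Errorf("context deadline exceeded") }

    func main() {
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            if err := probe(); err == nil {
                fmt.Println("control plane healthy, restart succeeded")
                return
            }
            time.Sleep(5 * time.Second) // matches the ~5s spacing between checks above
        }
        fmt.Println("! Unable to restart control-plane node(s), will reset cluster")
        // At this point minikube runs:
        //   kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
    }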
	I0813 17:33:56.677376    4376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0813 17:33:56.682461    4376 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 17:33:56.685158    4376 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 17:33:56.688088    4376 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 17:33:56.688095    4376 kubeadm.go:157] found existing configuration files:
	
	I0813 17:33:56.688129    4376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/admin.conf
	I0813 17:33:56.690896    4376 kubeadm.go:163] "https://control-plane.minikube.internal:50478" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0813 17:33:56.690926    4376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0813 17:33:56.693376    4376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/kubelet.conf
	I0813 17:33:56.696406    4376 kubeadm.go:163] "https://control-plane.minikube.internal:50478" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0813 17:33:56.696438    4376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0813 17:33:56.699678    4376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/controller-manager.conf
	I0813 17:33:56.702268    4376 kubeadm.go:163] "https://control-plane.minikube.internal:50478" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0813 17:33:56.702297    4376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0813 17:33:56.704842    4376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/scheduler.conf
	I0813 17:33:56.708031    4376 kubeadm.go:163] "https://control-plane.minikube.internal:50478" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50478 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0813 17:33:56.708061    4376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
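The cleanup just above follows a simple rule: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; otherwise it is removed so that kubeadm init can rewrite it. A sketch of that check, assuming direct file access rather than the ssh_runner minikube actually uses:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        endpoint := []byte("https://control-plane.minikube.internal:50478")
        files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
        for _, f := range files {
            path := filepath.Join("/etc/kubernetes", f)
            data, err := os.ReadFile(path)
            // A missing file, or one that no longer references the expected
            // endpoint, is deleted -- mirroring the grep-then-rm pairs above.
            if err != nil || !bytes.Contains(data, endpoint) {
                fmt.Println("removing stale config:", path)
                os.Remove(path)
            }
        }
    }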
	I0813 17:33:56.710905    4376 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0813 17:33:56.727383    4376 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0813 17:33:56.727413    4376 kubeadm.go:310] [preflight] Running pre-flight checks
	I0813 17:33:56.778312    4376 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0813 17:33:56.778398    4376 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0813 17:33:56.778481    4376 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0813 17:33:56.831640    4376 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0813 17:33:56.839779    4376 out.go:204]   - Generating certificates and keys ...
	I0813 17:33:56.839813    4376 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0813 17:33:56.839843    4376 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0813 17:33:56.839884    4376 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0813 17:33:56.839923    4376 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0813 17:33:56.839969    4376 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0813 17:33:56.839995    4376 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0813 17:33:56.840028    4376 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0813 17:33:56.840065    4376 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0813 17:33:56.840106    4376 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0813 17:33:56.840147    4376 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0813 17:33:56.840168    4376 kubeadm.go:310] [certs] Using the existing "sa" key
	I0813 17:33:56.840197    4376 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0813 17:33:56.895041    4376 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0813 17:33:56.990784    4376 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0813 17:33:57.129741    4376 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0813 17:33:57.295956    4376 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0813 17:33:57.324702    4376 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0813 17:33:57.325067    4376 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0813 17:33:57.325098    4376 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0813 17:33:57.415738    4376 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0813 17:33:57.423403    4376 out.go:204]   - Booting up control plane ...
	I0813 17:33:57.423489    4376 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0813 17:33:57.423536    4376 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0813 17:33:57.423594    4376 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0813 17:33:57.423641    4376 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0813 17:33:57.423770    4376 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0813 17:34:01.922353    4376 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503384 seconds
	I0813 17:34:01.922428    4376 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0813 17:34:01.925605    4376 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0813 17:34:02.433038    4376 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0813 17:34:02.433137    4376 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-967000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0813 17:34:02.936461    4376 kubeadm.go:310] [bootstrap-token] Using token: 3nwyfi.oaah5rc09050qhhe
	I0813 17:34:02.937973    4376 out.go:204]   - Configuring RBAC rules ...
	I0813 17:34:02.938120    4376 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0813 17:34:02.938410    4376 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0813 17:34:02.945668    4376 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0813 17:34:02.946933    4376 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0813 17:34:02.947848    4376 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0813 17:34:02.948743    4376 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0813 17:34:02.951716    4376 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0813 17:34:03.108341    4376 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0813 17:34:03.340848    4376 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0813 17:34:03.341213    4376 kubeadm.go:310] 
	I0813 17:34:03.341247    4376 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0813 17:34:03.341250    4376 kubeadm.go:310] 
	I0813 17:34:03.341291    4376 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0813 17:34:03.341294    4376 kubeadm.go:310] 
	I0813 17:34:03.341306    4376 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0813 17:34:03.341348    4376 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0813 17:34:03.341374    4376 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0813 17:34:03.341377    4376 kubeadm.go:310] 
	I0813 17:34:03.341409    4376 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0813 17:34:03.341413    4376 kubeadm.go:310] 
	I0813 17:34:03.341434    4376 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0813 17:34:03.341446    4376 kubeadm.go:310] 
	I0813 17:34:03.341473    4376 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0813 17:34:03.341515    4376 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0813 17:34:03.341561    4376 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0813 17:34:03.341564    4376 kubeadm.go:310] 
	I0813 17:34:03.341604    4376 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0813 17:34:03.341650    4376 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0813 17:34:03.341654    4376 kubeadm.go:310] 
	I0813 17:34:03.341706    4376 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3nwyfi.oaah5rc09050qhhe \
	I0813 17:34:03.341760    4376 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:94a653d9144e0f51dbf8cb0881c67d995fb93f16972a5a4e4bd9f3c8d4a5aa34 \
	I0813 17:34:03.341774    4376 kubeadm.go:310] 	--control-plane 
	I0813 17:34:03.341777    4376 kubeadm.go:310] 
	I0813 17:34:03.341829    4376 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0813 17:34:03.341837    4376 kubeadm.go:310] 
	I0813 17:34:03.341902    4376 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3nwyfi.oaah5rc09050qhhe \
	I0813 17:34:03.341961    4376 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:94a653d9144e0f51dbf8cb0881c67d995fb93f16972a5a4e4bd9f3c8d4a5aa34 
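The --discovery-token-ca-cert-hash printed above is the SHA-256 digest of the cluster CA's Subject Public Key Info, not of the whole certificate. A sketch of computing it from the PEM-encoded CA under the certificateDir used in this run:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }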
	I0813 17:34:03.342146    4376 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0813 17:34:03.342158    4376 cni.go:84] Creating CNI manager for ""
	I0813 17:34:03.342166    4376 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0813 17:34:03.346058    4376 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 17:34:03.354229    4376 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0813 17:34:03.357535    4376 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
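The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration. Its exact contents are not captured in this log; the sketch below writes a representative bridge conflist, so the field values (network name, subnet, plugin list) are assumptions rather than the byte-for-byte file:

    package main

    import (
        "fmt"
        "os"
    )

    // A representative bridge conflist; the real 1-k8s.conflist written above
    // may differ in names, cniVersion, and subnet.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }`

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            fmt.Println("write failed:", err)
        }
    }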
	I0813 17:34:03.362335    4376 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 17:34:03.362384    4376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 17:34:03.362401    4376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-967000 minikube.k8s.io/updated_at=2024_08_13T17_34_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf minikube.k8s.io/name=stopped-upgrade-967000 minikube.k8s.io/primary=true
	I0813 17:34:03.405361    4376 ops.go:34] apiserver oom_adj: -16
	I0813 17:34:03.405402    4376 kubeadm.go:1113] duration metric: took 43.0635ms to wait for elevateKubeSystemPrivileges
	I0813 17:34:03.405469    4376 kubeadm.go:394] duration metric: took 4m11.97615075s to StartCluster
	I0813 17:34:03.405480    4376 settings.go:142] acquiring lock: {Name:mkaf11e998595d0fbc8bedb0051c4325b4dc127d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:34:03.405567    4376 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:34:03.405973    4376 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/kubeconfig: {Name:mk4f6a628d9f9f6550ed229faba2a879ed685a75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:34:03.406160    4376 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:34:03.406166    4376 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0813 17:34:03.406205    4376 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-967000"
	I0813 17:34:03.406217    4376 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-967000"
	W0813 17:34:03.406228    4376 addons.go:243] addon storage-provisioner should already be in state true
	I0813 17:34:03.406237    4376 config.go:182] Loaded profile config "stopped-upgrade-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0813 17:34:03.406238    4376 host.go:66] Checking if "stopped-upgrade-967000" exists ...
	I0813 17:34:03.406267    4376 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-967000"
	I0813 17:34:03.406279    4376 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-967000"
	I0813 17:34:03.407376    4376 kapi.go:59] client config for stopped-upgrade-967000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/stopped-upgrade-967000/client.key", CAFile:"/Users/jenkins/minikube-integration/19429-1127/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105da7e30), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 17:34:03.407491    4376 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-967000"
	W0813 17:34:03.407495    4376 addons.go:243] addon default-storageclass should already be in state true
	I0813 17:34:03.407501    4376 host.go:66] Checking if "stopped-upgrade-967000" exists ...
	I0813 17:34:03.410183    4376 out.go:177] * Verifying Kubernetes components...
	I0813 17:34:03.410621    4376 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 17:34:03.414408    4376 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 17:34:03.414415    4376 sshutil.go:53] new ssh client: &{IP:localhost Port:50444 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/stopped-upgrade-967000/id_rsa Username:docker}
	I0813 17:34:03.418185    4376 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 17:34:03.422232    4376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 17:34:03.426092    4376 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 17:34:03.426100    4376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 17:34:03.426107    4376 sshutil.go:53] new ssh client: &{IP:localhost Port:50444 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/stopped-upgrade-967000/id_rsa Username:docker}
	I0813 17:34:03.518780    4376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0813 17:34:03.524255    4376 api_server.go:52] waiting for apiserver process to appear ...
	I0813 17:34:03.524327    4376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 17:34:03.531353    4376 api_server.go:72] duration metric: took 125.182083ms to wait for apiserver process to appear ...
	I0813 17:34:03.531366    4376 api_server.go:88] waiting for apiserver healthz status ...
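Before this healthz wait begins, the run first confirms that a kube-apiserver process exists at all, via the pgrep command a few lines above. A minimal sketch of that wait loop, with the pgrep pattern taken from the Run: line and a one-minute cap added so the sketch cannot spin forever:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        for time.Since(start) < time.Minute {
            // sudo pgrep -xnf kube-apiserver.*minikube.* -- exit status 0 means
            // a matching process exists; non-zero means keep waiting.
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                fmt.Printf("apiserver process appeared after %s\n", time.Since(start))
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver process never appeared")
    }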
	I0813 17:34:03.531375    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:03.535827    4376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 17:34:03.544666    4376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 17:34:03.890896    4376 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0813 17:34:03.890908    4376 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0813 17:34:08.533356    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:08.533376    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:13.533913    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:13.533932    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:18.534453    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:18.534474    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:23.535072    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:23.535099    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:28.535996    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:28.536021    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:33.536908    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:33.536944    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0813 17:34:33.892809    4376 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0813 17:34:33.897222    4376 out.go:177] * Enabled addons: storage-provisioner
	I0813 17:34:33.904037    4376 addons.go:510] duration metric: took 30.498397875s for enable addons: enabled=[storage-provisioner]
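
The api_server.go:253/269 pairs that follow are the health gate: minikube probes the guest's /healthz endpoint, each probe fails with a roughly five-second client timeout ("Client.Timeout exceeded while awaiting headers"), and the loop immediately retries. A hedged sketch of that polling pattern follows; the function name, the 5s timeout, and the skip-verify transport are assumptions for illustration (minikube verifies against the profile CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls <host>/healthz until it answers 200 OK or the
    // deadline passes. The short per-probe client timeout is what produces
    // the repeated "stopped: ... Client.Timeout exceeded" lines above.
    func waitForHealthz(host string, deadline time.Time) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the ~5s cadence in the log
    		Transport: &http.Transport{
    			// Sketch only: the real client pins the cluster CA instead.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(host + "/healthz")
    		if err != nil {
    			continue // timeout or connection error: retry, as in api_server.go:269
    		}
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			return nil
    		}
    	}
    	return fmt.Errorf("apiserver at %s never reported healthy", host)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://10.0.2.15:8443", time.Now().Add(time.Minute)))
    }
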
	I0813 17:34:38.538147    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:38.538188    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:43.539729    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:43.539751    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:48.540684    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:48.540730    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:53.541543    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:53.541585    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:34:58.543756    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:34:58.543777    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:35:03.545875    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:35:03.545970    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:35:03.557147    4376 logs.go:276] 1 containers: [84eb595689b8]
	I0813 17:35:03.557227    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:35:03.567565    4376 logs.go:276] 1 containers: [f6b849b40c18]
	I0813 17:35:03.567645    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:35:03.578247    4376 logs.go:276] 2 containers: [a603ad954de4 635f6b29c5f2]
	I0813 17:35:03.578314    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:35:03.589115    4376 logs.go:276] 1 containers: [fbd9cf0ccb74]
	I0813 17:35:03.589197    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:35:03.600491    4376 logs.go:276] 1 containers: [f0e3dfdbe54f]
	I0813 17:35:03.600565    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:35:03.610956    4376 logs.go:276] 1 containers: [1d0b78d62f22]
	I0813 17:35:03.611029    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:35:03.621076    4376 logs.go:276] 0 containers: []
	W0813 17:35:03.621091    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:35:03.621154    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:35:03.631593    4376 logs.go:276] 1 containers: [ecc88d9761cf]
	I0813 17:35:03.631609    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:35:03.631615    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:35:03.635865    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:35:03.635872    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:35:03.673834    4376 logs.go:123] Gathering logs for kube-apiserver [84eb595689b8] ...
	I0813 17:35:03.673846    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84eb595689b8"
	I0813 17:35:03.689087    4376 logs.go:123] Gathering logs for etcd [f6b849b40c18] ...
	I0813 17:35:03.689098    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b849b40c18"
	I0813 17:35:03.704603    4376 logs.go:123] Gathering logs for coredns [a603ad954de4] ...
	I0813 17:35:03.704614    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a603ad954de4"
	I0813 17:35:03.716965    4376 logs.go:123] Gathering logs for kube-controller-manager [1d0b78d62f22] ...
	I0813 17:35:03.716978    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d0b78d62f22"
	I0813 17:35:03.736629    4376 logs.go:123] Gathering logs for storage-provisioner [ecc88d9761cf] ...
	I0813 17:35:03.736639    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecc88d9761cf"
	I0813 17:35:03.748028    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:35:03.748039    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:35:03.781916    4376 logs.go:123] Gathering logs for coredns [635f6b29c5f2] ...
	I0813 17:35:03.781925    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 635f6b29c5f2"
	I0813 17:35:03.797370    4376 logs.go:123] Gathering logs for kube-scheduler [fbd9cf0ccb74] ...
	I0813 17:35:03.797383    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbd9cf0ccb74"
	I0813 17:35:03.812694    4376 logs.go:123] Gathering logs for kube-proxy [f0e3dfdbe54f] ...
	I0813 17:35:03.812704    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e3dfdbe54f"
	I0813 17:35:03.828001    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:35:03.828012    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:35:03.852477    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:35:03.852486    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
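
From here the log settles into a fixed diagnostic cycle: after each failed healthz round, minikube locates every control-plane container by name (docker ps -a --filter=name=k8s_<component> --format={{.ID}}), tails the last 400 lines of each container it finds, pulls kubelet and docker logs via journalctl, and finishes with a crictl-then-docker container-status fallback. A rough sketch of that gathering loop, with helper names invented for illustration (the real code runs these commands in the guest over SSH via ssh_runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists IDs of (possibly exited) containers whose name matches
    // the kubeadm convention k8s_<component>, mirroring:
    //   docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
    		ids, err := containerIDs(c)
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", c)
    			continue
    		}
    		for _, id := range ids {
    			// docker logs --tail 400 <id>, as run repeatedly in the cycle above
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
    		}
    	}
    }

The same cycle repeats below with only the ordering of the gathered components varying; note that at 17:36:22 the coredns filter starts returning four containers instead of two, so pods are being recreated inside the guest even though the apiserver endpoint remains unreachable from the host.
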
	I0813 17:35:06.364582    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:35:11.366718    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:35:11.366848    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:35:11.380343    4376 logs.go:276] 1 containers: [84eb595689b8]
	I0813 17:35:11.380424    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:35:11.391343    4376 logs.go:276] 1 containers: [f6b849b40c18]
	I0813 17:35:11.391428    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:35:11.401497    4376 logs.go:276] 2 containers: [a603ad954de4 635f6b29c5f2]
	I0813 17:35:11.401576    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:35:11.411938    4376 logs.go:276] 1 containers: [fbd9cf0ccb74]
	I0813 17:35:11.412008    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:35:11.422354    4376 logs.go:276] 1 containers: [f0e3dfdbe54f]
	I0813 17:35:11.422428    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:35:11.433200    4376 logs.go:276] 1 containers: [1d0b78d62f22]
	I0813 17:35:11.433273    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:35:11.443441    4376 logs.go:276] 0 containers: []
	W0813 17:35:11.443453    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:35:11.443522    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:35:11.453980    4376 logs.go:276] 1 containers: [ecc88d9761cf]
	I0813 17:35:11.453996    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:35:11.454005    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:35:11.458330    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:35:11.458336    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:35:11.495254    4376 logs.go:123] Gathering logs for coredns [a603ad954de4] ...
	I0813 17:35:11.495265    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a603ad954de4"
	I0813 17:35:11.507445    4376 logs.go:123] Gathering logs for coredns [635f6b29c5f2] ...
	I0813 17:35:11.507457    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 635f6b29c5f2"
	I0813 17:35:11.518914    4376 logs.go:123] Gathering logs for kube-scheduler [fbd9cf0ccb74] ...
	I0813 17:35:11.518925    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbd9cf0ccb74"
	I0813 17:35:11.534166    4376 logs.go:123] Gathering logs for kube-proxy [f0e3dfdbe54f] ...
	I0813 17:35:11.534176    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e3dfdbe54f"
	I0813 17:35:11.546072    4376 logs.go:123] Gathering logs for kube-controller-manager [1d0b78d62f22] ...
	I0813 17:35:11.546084    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d0b78d62f22"
	I0813 17:35:11.563462    4376 logs.go:123] Gathering logs for storage-provisioner [ecc88d9761cf] ...
	I0813 17:35:11.563471    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecc88d9761cf"
	I0813 17:35:11.574804    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:35:11.574816    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:35:11.611527    4376 logs.go:123] Gathering logs for kube-apiserver [84eb595689b8] ...
	I0813 17:35:11.611544    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84eb595689b8"
	I0813 17:35:11.625758    4376 logs.go:123] Gathering logs for etcd [f6b849b40c18] ...
	I0813 17:35:11.625768    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b849b40c18"
	I0813 17:35:11.639947    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:35:11.639957    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:35:11.664800    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:35:11.664812    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:35:14.179674    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:35:19.182170    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:35:19.182626    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:35:19.224029    4376 logs.go:276] 1 containers: [84eb595689b8]
	I0813 17:35:19.224166    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:35:19.243921    4376 logs.go:276] 1 containers: [f6b849b40c18]
	I0813 17:35:19.244027    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:35:19.258702    4376 logs.go:276] 2 containers: [a603ad954de4 635f6b29c5f2]
	I0813 17:35:19.258780    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:35:19.272411    4376 logs.go:276] 1 containers: [fbd9cf0ccb74]
	I0813 17:35:19.272490    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:35:19.284357    4376 logs.go:276] 1 containers: [f0e3dfdbe54f]
	I0813 17:35:19.284436    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:35:19.294522    4376 logs.go:276] 1 containers: [1d0b78d62f22]
	I0813 17:35:19.294597    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:35:19.304195    4376 logs.go:276] 0 containers: []
	W0813 17:35:19.304210    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:35:19.304274    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:35:19.314354    4376 logs.go:276] 1 containers: [ecc88d9761cf]
	I0813 17:35:19.314368    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:35:19.314373    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:35:19.318629    4376 logs.go:123] Gathering logs for kube-apiserver [84eb595689b8] ...
	I0813 17:35:19.318634    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84eb595689b8"
	I0813 17:35:19.332505    4376 logs.go:123] Gathering logs for etcd [f6b849b40c18] ...
	I0813 17:35:19.332515    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b849b40c18"
	I0813 17:35:19.346675    4376 logs.go:123] Gathering logs for coredns [635f6b29c5f2] ...
	I0813 17:35:19.346684    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 635f6b29c5f2"
	I0813 17:35:19.358366    4376 logs.go:123] Gathering logs for kube-controller-manager [1d0b78d62f22] ...
	I0813 17:35:19.358377    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d0b78d62f22"
	I0813 17:35:19.375617    4376 logs.go:123] Gathering logs for storage-provisioner [ecc88d9761cf] ...
	I0813 17:35:19.375629    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecc88d9761cf"
	I0813 17:35:19.386664    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:35:19.386674    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:35:19.409708    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:35:19.409717    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:35:19.421216    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:35:19.421229    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:35:19.455884    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:35:19.455895    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:35:19.493642    4376 logs.go:123] Gathering logs for coredns [a603ad954de4] ...
	I0813 17:35:19.493653    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a603ad954de4"
	I0813 17:35:19.505196    4376 logs.go:123] Gathering logs for kube-scheduler [fbd9cf0ccb74] ...
	I0813 17:35:19.505206    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbd9cf0ccb74"
	I0813 17:35:19.522295    4376 logs.go:123] Gathering logs for kube-proxy [f0e3dfdbe54f] ...
	I0813 17:35:19.522306    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e3dfdbe54f"
	I0813 17:35:22.034956    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:35:27.036431    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:35:27.036485    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:35:27.047656    4376 logs.go:276] 1 containers: [84eb595689b8]
	I0813 17:35:27.047718    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:35:27.058363    4376 logs.go:276] 1 containers: [f6b849b40c18]
	I0813 17:35:27.058432    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:35:27.068751    4376 logs.go:276] 2 containers: [a603ad954de4 635f6b29c5f2]
	I0813 17:35:27.068809    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:35:27.080029    4376 logs.go:276] 1 containers: [fbd9cf0ccb74]
	I0813 17:35:27.080082    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:35:27.091806    4376 logs.go:276] 1 containers: [f0e3dfdbe54f]
	I0813 17:35:27.091901    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:35:27.103440    4376 logs.go:276] 1 containers: [1d0b78d62f22]
	I0813 17:35:27.103491    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:35:27.114468    4376 logs.go:276] 0 containers: []
	W0813 17:35:27.114482    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:35:27.114536    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:35:27.125633    4376 logs.go:276] 1 containers: [ecc88d9761cf]
	I0813 17:35:27.125647    4376 logs.go:123] Gathering logs for etcd [f6b849b40c18] ...
	I0813 17:35:27.125654    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b849b40c18"
	I0813 17:35:27.140968    4376 logs.go:123] Gathering logs for coredns [a603ad954de4] ...
	I0813 17:35:27.140977    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a603ad954de4"
	I0813 17:35:27.152702    4376 logs.go:123] Gathering logs for kube-proxy [f0e3dfdbe54f] ...
	I0813 17:35:27.152711    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e3dfdbe54f"
	I0813 17:35:27.165886    4376 logs.go:123] Gathering logs for kube-controller-manager [1d0b78d62f22] ...
	I0813 17:35:27.165896    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d0b78d62f22"
	I0813 17:35:27.184127    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:35:27.184140    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:35:27.210611    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:35:27.210624    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:35:27.215124    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:35:27.215133    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:35:27.250591    4376 logs.go:123] Gathering logs for kube-apiserver [84eb595689b8] ...
	I0813 17:35:27.250600    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84eb595689b8"
	I0813 17:35:27.264798    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:35:27.264808    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:35:27.277472    4376 logs.go:123] Gathering logs for storage-provisioner [ecc88d9761cf] ...
	I0813 17:35:27.277484    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecc88d9761cf"
	I0813 17:35:27.289196    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:35:27.289205    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:35:27.325837    4376 logs.go:123] Gathering logs for coredns [635f6b29c5f2] ...
	I0813 17:35:27.325856    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 635f6b29c5f2"
	I0813 17:35:27.339647    4376 logs.go:123] Gathering logs for kube-scheduler [fbd9cf0ccb74] ...
	I0813 17:35:27.339662    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbd9cf0ccb74"
	I0813 17:35:29.865330    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:35:34.867855    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:35:34.867943    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:35:34.879165    4376 logs.go:276] 1 containers: [84eb595689b8]
	I0813 17:35:34.879236    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:35:34.890041    4376 logs.go:276] 1 containers: [f6b849b40c18]
	I0813 17:35:34.890110    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:35:34.900818    4376 logs.go:276] 2 containers: [a603ad954de4 635f6b29c5f2]
	I0813 17:35:34.900863    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:35:34.912813    4376 logs.go:276] 1 containers: [fbd9cf0ccb74]
	I0813 17:35:34.912860    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:35:34.924296    4376 logs.go:276] 1 containers: [f0e3dfdbe54f]
	I0813 17:35:34.924362    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:35:34.935247    4376 logs.go:276] 1 containers: [1d0b78d62f22]
	I0813 17:35:34.935320    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:35:34.947994    4376 logs.go:276] 0 containers: []
	W0813 17:35:34.948006    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:35:34.948069    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:35:34.958417    4376 logs.go:276] 1 containers: [ecc88d9761cf]
	I0813 17:35:34.958431    4376 logs.go:123] Gathering logs for kube-controller-manager [1d0b78d62f22] ...
	I0813 17:35:34.958437    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d0b78d62f22"
	I0813 17:35:34.976756    4376 logs.go:123] Gathering logs for storage-provisioner [ecc88d9761cf] ...
	I0813 17:35:34.976768    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecc88d9761cf"
	I0813 17:35:34.988738    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:35:34.988748    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:35:35.022837    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:35:35.022847    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:35:35.027047    4376 logs.go:123] Gathering logs for etcd [f6b849b40c18] ...
	I0813 17:35:35.027053    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b849b40c18"
	I0813 17:35:35.042412    4376 logs.go:123] Gathering logs for coredns [a603ad954de4] ...
	I0813 17:35:35.042422    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a603ad954de4"
	I0813 17:35:35.063751    4376 logs.go:123] Gathering logs for kube-proxy [f0e3dfdbe54f] ...
	I0813 17:35:35.063765    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e3dfdbe54f"
	I0813 17:35:35.079687    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:35:35.079698    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:35:35.091139    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:35:35.091151    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:35:35.125492    4376 logs.go:123] Gathering logs for kube-apiserver [84eb595689b8] ...
	I0813 17:35:35.125502    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84eb595689b8"
	I0813 17:35:35.139713    4376 logs.go:123] Gathering logs for coredns [635f6b29c5f2] ...
	I0813 17:35:35.139722    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 635f6b29c5f2"
	I0813 17:35:35.151191    4376 logs.go:123] Gathering logs for kube-scheduler [fbd9cf0ccb74] ...
	I0813 17:35:35.151200    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbd9cf0ccb74"
	I0813 17:35:35.165521    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:35:35.165529    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:35:37.691204    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:35:42.693659    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:35:42.693733    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:35:42.706468    4376 logs.go:276] 1 containers: [84eb595689b8]
	I0813 17:35:42.706537    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:35:42.717032    4376 logs.go:276] 1 containers: [f6b849b40c18]
	I0813 17:35:42.717099    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:35:42.735795    4376 logs.go:276] 2 containers: [a603ad954de4 635f6b29c5f2]
	I0813 17:35:42.735864    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:35:42.746943    4376 logs.go:276] 1 containers: [fbd9cf0ccb74]
	I0813 17:35:42.747033    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:35:42.757151    4376 logs.go:276] 1 containers: [f0e3dfdbe54f]
	I0813 17:35:42.757222    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:35:42.767481    4376 logs.go:276] 1 containers: [1d0b78d62f22]
	I0813 17:35:42.767556    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:35:42.777480    4376 logs.go:276] 0 containers: []
	W0813 17:35:42.777492    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:35:42.777562    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:35:42.788349    4376 logs.go:276] 1 containers: [ecc88d9761cf]
	I0813 17:35:42.788363    4376 logs.go:123] Gathering logs for coredns [635f6b29c5f2] ...
	I0813 17:35:42.788369    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 635f6b29c5f2"
	I0813 17:35:42.799693    4376 logs.go:123] Gathering logs for kube-proxy [f0e3dfdbe54f] ...
	I0813 17:35:42.799703    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e3dfdbe54f"
	I0813 17:35:42.810833    4376 logs.go:123] Gathering logs for kube-controller-manager [1d0b78d62f22] ...
	I0813 17:35:42.810844    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d0b78d62f22"
	I0813 17:35:42.828217    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:35:42.828226    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:35:42.851536    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:35:42.851545    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:35:42.862599    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:35:42.862611    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:35:42.896491    4376 logs.go:123] Gathering logs for kube-apiserver [84eb595689b8] ...
	I0813 17:35:42.896500    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84eb595689b8"
	I0813 17:35:42.915478    4376 logs.go:123] Gathering logs for coredns [a603ad954de4] ...
	I0813 17:35:42.915490    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a603ad954de4"
	I0813 17:35:42.927329    4376 logs.go:123] Gathering logs for kube-scheduler [fbd9cf0ccb74] ...
	I0813 17:35:42.927340    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbd9cf0ccb74"
	I0813 17:35:42.941972    4376 logs.go:123] Gathering logs for storage-provisioner [ecc88d9761cf] ...
	I0813 17:35:42.941981    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecc88d9761cf"
	I0813 17:35:42.957672    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:35:42.957682    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:35:42.962305    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:35:42.962311    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:35:43.000852    4376 logs.go:123] Gathering logs for etcd [f6b849b40c18] ...
	I0813 17:35:43.000863    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b849b40c18"
	I0813 17:35:45.518148    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:35:50.520621    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:35:50.521005    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:35:50.558887    4376 logs.go:276] 1 containers: [84eb595689b8]
	I0813 17:35:50.559037    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:35:50.579650    4376 logs.go:276] 1 containers: [f6b849b40c18]
	I0813 17:35:50.579749    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:35:50.593997    4376 logs.go:276] 2 containers: [a603ad954de4 635f6b29c5f2]
	I0813 17:35:50.594073    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:35:50.606580    4376 logs.go:276] 1 containers: [fbd9cf0ccb74]
	I0813 17:35:50.606666    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:35:50.616982    4376 logs.go:276] 1 containers: [f0e3dfdbe54f]
	I0813 17:35:50.617051    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:35:50.630245    4376 logs.go:276] 1 containers: [1d0b78d62f22]
	I0813 17:35:50.630305    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:35:50.640145    4376 logs.go:276] 0 containers: []
	W0813 17:35:50.640154    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:35:50.640211    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:35:50.650188    4376 logs.go:276] 1 containers: [ecc88d9761cf]
	I0813 17:35:50.650204    4376 logs.go:123] Gathering logs for kube-scheduler [fbd9cf0ccb74] ...
	I0813 17:35:50.650209    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbd9cf0ccb74"
	I0813 17:35:50.664944    4376 logs.go:123] Gathering logs for kube-proxy [f0e3dfdbe54f] ...
	I0813 17:35:50.664953    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e3dfdbe54f"
	I0813 17:35:50.676332    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:35:50.676341    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:35:50.699243    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:35:50.699252    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:35:50.713787    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:35:50.713796    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:35:50.748471    4376 logs.go:123] Gathering logs for kube-apiserver [84eb595689b8] ...
	I0813 17:35:50.748483    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84eb595689b8"
	I0813 17:35:50.762601    4376 logs.go:123] Gathering logs for etcd [f6b849b40c18] ...
	I0813 17:35:50.762612    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b849b40c18"
	I0813 17:35:50.776061    4376 logs.go:123] Gathering logs for coredns [635f6b29c5f2] ...
	I0813 17:35:50.776070    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 635f6b29c5f2"
	I0813 17:35:50.787267    4376 logs.go:123] Gathering logs for kube-controller-manager [1d0b78d62f22] ...
	I0813 17:35:50.787277    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d0b78d62f22"
	I0813 17:35:50.804292    4376 logs.go:123] Gathering logs for storage-provisioner [ecc88d9761cf] ...
	I0813 17:35:50.804302    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecc88d9761cf"
	I0813 17:35:50.816177    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:35:50.816186    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:35:50.851484    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:35:50.851502    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:35:50.856297    4376 logs.go:123] Gathering logs for coredns [a603ad954de4] ...
	I0813 17:35:50.856307    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a603ad954de4"
	I0813 17:35:53.369954    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:35:58.372300    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:35:58.372669    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:35:58.409122    4376 logs.go:276] 1 containers: [84eb595689b8]
	I0813 17:35:58.409251    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:35:58.430326    4376 logs.go:276] 1 containers: [f6b849b40c18]
	I0813 17:35:58.430433    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:35:58.444911    4376 logs.go:276] 2 containers: [a603ad954de4 635f6b29c5f2]
	I0813 17:35:58.444990    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:35:58.457337    4376 logs.go:276] 1 containers: [fbd9cf0ccb74]
	I0813 17:35:58.457411    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:35:58.468039    4376 logs.go:276] 1 containers: [f0e3dfdbe54f]
	I0813 17:35:58.468107    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:35:58.478565    4376 logs.go:276] 1 containers: [1d0b78d62f22]
	I0813 17:35:58.478647    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:35:58.488495    4376 logs.go:276] 0 containers: []
	W0813 17:35:58.488512    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:35:58.488579    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:35:58.507040    4376 logs.go:276] 1 containers: [ecc88d9761cf]
	I0813 17:35:58.507055    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:35:58.507060    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:35:58.511223    4376 logs.go:123] Gathering logs for kube-apiserver [84eb595689b8] ...
	I0813 17:35:58.511229    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84eb595689b8"
	I0813 17:35:58.524798    4376 logs.go:123] Gathering logs for kube-scheduler [fbd9cf0ccb74] ...
	I0813 17:35:58.524811    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbd9cf0ccb74"
	I0813 17:35:58.540083    4376 logs.go:123] Gathering logs for kube-proxy [f0e3dfdbe54f] ...
	I0813 17:35:58.540094    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e3dfdbe54f"
	I0813 17:35:58.551178    4376 logs.go:123] Gathering logs for kube-controller-manager [1d0b78d62f22] ...
	I0813 17:35:58.551188    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d0b78d62f22"
	I0813 17:35:58.571626    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:35:58.571636    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:35:58.605335    4376 logs.go:123] Gathering logs for etcd [f6b849b40c18] ...
	I0813 17:35:58.605341    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b849b40c18"
	I0813 17:35:58.618306    4376 logs.go:123] Gathering logs for coredns [a603ad954de4] ...
	I0813 17:35:58.618316    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a603ad954de4"
	I0813 17:35:58.630348    4376 logs.go:123] Gathering logs for coredns [635f6b29c5f2] ...
	I0813 17:35:58.630360    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 635f6b29c5f2"
	I0813 17:35:58.645261    4376 logs.go:123] Gathering logs for storage-provisioner [ecc88d9761cf] ...
	I0813 17:35:58.645271    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecc88d9761cf"
	I0813 17:35:58.659786    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:35:58.659795    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:35:58.684162    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:35:58.684173    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:35:58.696209    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:35:58.696222    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:36:01.235736    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:36:06.237965    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:36:06.238282    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:36:06.268109    4376 logs.go:276] 1 containers: [84eb595689b8]
	I0813 17:36:06.268254    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:36:06.289508    4376 logs.go:276] 1 containers: [f6b849b40c18]
	I0813 17:36:06.289609    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:36:06.307785    4376 logs.go:276] 2 containers: [a603ad954de4 635f6b29c5f2]
	I0813 17:36:06.307876    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:36:06.318993    4376 logs.go:276] 1 containers: [fbd9cf0ccb74]
	I0813 17:36:06.319082    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:36:06.329070    4376 logs.go:276] 1 containers: [f0e3dfdbe54f]
	I0813 17:36:06.329150    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:36:06.339248    4376 logs.go:276] 1 containers: [1d0b78d62f22]
	I0813 17:36:06.339317    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:36:06.349113    4376 logs.go:276] 0 containers: []
	W0813 17:36:06.349132    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:36:06.349196    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:36:06.359163    4376 logs.go:276] 1 containers: [ecc88d9761cf]
	I0813 17:36:06.359178    4376 logs.go:123] Gathering logs for coredns [635f6b29c5f2] ...
	I0813 17:36:06.359184    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 635f6b29c5f2"
	I0813 17:36:06.370482    4376 logs.go:123] Gathering logs for storage-provisioner [ecc88d9761cf] ...
	I0813 17:36:06.370492    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecc88d9761cf"
	I0813 17:36:06.382399    4376 logs.go:123] Gathering logs for coredns [a603ad954de4] ...
	I0813 17:36:06.382412    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a603ad954de4"
	I0813 17:36:06.395516    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:36:06.395527    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:36:06.399879    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:36:06.399885    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:36:06.433369    4376 logs.go:123] Gathering logs for kube-apiserver [84eb595689b8] ...
	I0813 17:36:06.433380    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84eb595689b8"
	I0813 17:36:06.448017    4376 logs.go:123] Gathering logs for etcd [f6b849b40c18] ...
	I0813 17:36:06.448027    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b849b40c18"
	I0813 17:36:06.461893    4376 logs.go:123] Gathering logs for kube-scheduler [fbd9cf0ccb74] ...
	I0813 17:36:06.461904    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbd9cf0ccb74"
	I0813 17:36:06.476476    4376 logs.go:123] Gathering logs for kube-proxy [f0e3dfdbe54f] ...
	I0813 17:36:06.476488    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e3dfdbe54f"
	I0813 17:36:06.488442    4376 logs.go:123] Gathering logs for kube-controller-manager [1d0b78d62f22] ...
	I0813 17:36:06.488452    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d0b78d62f22"
	I0813 17:36:06.505538    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:36:06.505549    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:36:06.541892    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:36:06.541900    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:36:06.552970    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:36:06.552982    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:36:09.078604    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:36:14.081110    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:36:14.081415    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:36:14.118611    4376 logs.go:276] 1 containers: [84eb595689b8]
	I0813 17:36:14.118756    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:36:14.138513    4376 logs.go:276] 1 containers: [f6b849b40c18]
	I0813 17:36:14.138623    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:36:14.152875    4376 logs.go:276] 2 containers: [a603ad954de4 635f6b29c5f2]
	I0813 17:36:14.152956    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:36:14.164601    4376 logs.go:276] 1 containers: [fbd9cf0ccb74]
	I0813 17:36:14.164653    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:36:14.175321    4376 logs.go:276] 1 containers: [f0e3dfdbe54f]
	I0813 17:36:14.175393    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:36:14.185474    4376 logs.go:276] 1 containers: [1d0b78d62f22]
	I0813 17:36:14.185539    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:36:14.196875    4376 logs.go:276] 0 containers: []
	W0813 17:36:14.196885    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:36:14.196940    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:36:14.207560    4376 logs.go:276] 1 containers: [ecc88d9761cf]
	I0813 17:36:14.207576    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:36:14.207582    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:36:14.247917    4376 logs.go:123] Gathering logs for kube-apiserver [84eb595689b8] ...
	I0813 17:36:14.247927    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84eb595689b8"
	I0813 17:36:14.266600    4376 logs.go:123] Gathering logs for coredns [635f6b29c5f2] ...
	I0813 17:36:14.266613    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 635f6b29c5f2"
	I0813 17:36:14.281004    4376 logs.go:123] Gathering logs for kube-proxy [f0e3dfdbe54f] ...
	I0813 17:36:14.281015    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e3dfdbe54f"
	I0813 17:36:14.307883    4376 logs.go:123] Gathering logs for kube-controller-manager [1d0b78d62f22] ...
	I0813 17:36:14.307896    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d0b78d62f22"
	I0813 17:36:14.354800    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:36:14.354820    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:36:14.371651    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:36:14.371662    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:36:14.412211    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:36:14.412230    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:36:14.417840    4376 logs.go:123] Gathering logs for etcd [f6b849b40c18] ...
	I0813 17:36:14.417851    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b849b40c18"
	I0813 17:36:14.438735    4376 logs.go:123] Gathering logs for coredns [a603ad954de4] ...
	I0813 17:36:14.438746    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a603ad954de4"
	I0813 17:36:14.450266    4376 logs.go:123] Gathering logs for kube-scheduler [fbd9cf0ccb74] ...
	I0813 17:36:14.450277    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbd9cf0ccb74"
	I0813 17:36:14.464617    4376 logs.go:123] Gathering logs for storage-provisioner [ecc88d9761cf] ...
	I0813 17:36:14.464628    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecc88d9761cf"
	I0813 17:36:14.475714    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:36:14.475725    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:36:17.003195    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:36:22.005770    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:36:22.006082    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:36:22.042563    4376 logs.go:276] 1 containers: [84eb595689b8]
	I0813 17:36:22.042695    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:36:22.063182    4376 logs.go:276] 1 containers: [f6b849b40c18]
	I0813 17:36:22.063271    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:36:22.078217    4376 logs.go:276] 4 containers: [825a106c46f0 ebcf3cf91752 a603ad954de4 635f6b29c5f2]
	I0813 17:36:22.078302    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:36:22.092105    4376 logs.go:276] 1 containers: [fbd9cf0ccb74]
	I0813 17:36:22.092176    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:36:22.102765    4376 logs.go:276] 1 containers: [f0e3dfdbe54f]
	I0813 17:36:22.102838    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:36:22.113378    4376 logs.go:276] 1 containers: [1d0b78d62f22]
	I0813 17:36:22.113445    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:36:22.124091    4376 logs.go:276] 0 containers: []
	W0813 17:36:22.124104    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:36:22.124166    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:36:22.134327    4376 logs.go:276] 1 containers: [ecc88d9761cf]
	I0813 17:36:22.134345    4376 logs.go:123] Gathering logs for coredns [825a106c46f0] ...
	I0813 17:36:22.134351    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825a106c46f0"
	I0813 17:36:22.145758    4376 logs.go:123] Gathering logs for coredns [ebcf3cf91752] ...
	I0813 17:36:22.145770    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcf3cf91752"
	I0813 17:36:22.160983    4376 logs.go:123] Gathering logs for kube-scheduler [fbd9cf0ccb74] ...
	I0813 17:36:22.160993    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbd9cf0ccb74"
	I0813 17:36:22.175544    4376 logs.go:123] Gathering logs for storage-provisioner [ecc88d9761cf] ...
	I0813 17:36:22.175557    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecc88d9761cf"
	I0813 17:36:22.187928    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:36:22.187939    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:36:22.224871    4376 logs.go:123] Gathering logs for kube-apiserver [84eb595689b8] ...
	I0813 17:36:22.224882    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84eb595689b8"
	I0813 17:36:22.243311    4376 logs.go:123] Gathering logs for coredns [635f6b29c5f2] ...
	I0813 17:36:22.243321    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 635f6b29c5f2"
	I0813 17:36:22.255079    4376 logs.go:123] Gathering logs for kube-controller-manager [1d0b78d62f22] ...
	I0813 17:36:22.255090    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d0b78d62f22"
	I0813 17:36:22.274129    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:36:22.274142    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:36:22.299236    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:36:22.299245    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:36:22.311306    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:36:22.311318    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:36:22.346909    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:36:22.346918    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:36:22.351094    4376 logs.go:123] Gathering logs for etcd [f6b849b40c18] ...
	I0813 17:36:22.351099    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b849b40c18"
	I0813 17:36:22.365189    4376 logs.go:123] Gathering logs for coredns [a603ad954de4] ...
	I0813 17:36:22.365199    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a603ad954de4"
	I0813 17:36:22.376883    4376 logs.go:123] Gathering logs for kube-proxy [f0e3dfdbe54f] ...
	I0813 17:36:22.376893    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e3dfdbe54f"
	I0813 17:36:24.890668    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:36:29.893364    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:36:29.893713    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:36:29.941629    4376 logs.go:276] 1 containers: [84eb595689b8]
	I0813 17:36:29.941786    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:36:29.961097    4376 logs.go:276] 1 containers: [f6b849b40c18]
	I0813 17:36:29.961200    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:36:29.976163    4376 logs.go:276] 4 containers: [825a106c46f0 ebcf3cf91752 a603ad954de4 635f6b29c5f2]
	I0813 17:36:29.976242    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:36:29.988149    4376 logs.go:276] 1 containers: [fbd9cf0ccb74]
	I0813 17:36:29.988212    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:36:30.002909    4376 logs.go:276] 1 containers: [f0e3dfdbe54f]
	I0813 17:36:30.002983    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:36:30.013302    4376 logs.go:276] 1 containers: [1d0b78d62f22]
	I0813 17:36:30.013377    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:36:30.023440    4376 logs.go:276] 0 containers: []
	W0813 17:36:30.023450    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:36:30.023518    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:36:30.033998    4376 logs.go:276] 1 containers: [ecc88d9761cf]
	I0813 17:36:30.034015    4376 logs.go:123] Gathering logs for coredns [a603ad954de4] ...
	I0813 17:36:30.034020    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a603ad954de4"
	I0813 17:36:30.045635    4376 logs.go:123] Gathering logs for kube-scheduler [fbd9cf0ccb74] ...
	I0813 17:36:30.045647    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbd9cf0ccb74"
	I0813 17:36:30.059749    4376 logs.go:123] Gathering logs for kube-proxy [f0e3dfdbe54f] ...
	I0813 17:36:30.059760    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e3dfdbe54f"
	I0813 17:36:30.071552    4376 logs.go:123] Gathering logs for storage-provisioner [ecc88d9761cf] ...
	I0813 17:36:30.071562    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecc88d9761cf"
	I0813 17:36:30.082629    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:36:30.082639    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:36:30.105816    4376 logs.go:123] Gathering logs for coredns [825a106c46f0] ...
	I0813 17:36:30.105822    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825a106c46f0"
	I0813 17:36:30.117069    4376 logs.go:123] Gathering logs for kube-controller-manager [1d0b78d62f22] ...
	I0813 17:36:30.117079    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d0b78d62f22"
	I0813 17:36:30.134213    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:36:30.134224    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:36:30.168468    4376 logs.go:123] Gathering logs for etcd [f6b849b40c18] ...
	I0813 17:36:30.168476    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b849b40c18"
	I0813 17:36:30.183051    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:36:30.183063    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:36:30.194650    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:36:30.194663    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:36:30.198790    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:36:30.198797    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:36:30.233397    4376 logs.go:123] Gathering logs for kube-apiserver [84eb595689b8] ...
	I0813 17:36:30.233407    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84eb595689b8"
	I0813 17:36:30.251618    4376 logs.go:123] Gathering logs for coredns [ebcf3cf91752] ...
	I0813 17:36:30.251630    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcf3cf91752"
	I0813 17:36:30.262802    4376 logs.go:123] Gathering logs for coredns [635f6b29c5f2] ...
	I0813 17:36:30.262811    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 635f6b29c5f2"
	I0813 17:36:32.779406    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:36:37.781966    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:36:37.782056    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:36:37.798236    4376 logs.go:276] 1 containers: [84eb595689b8]
	I0813 17:36:37.798293    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:36:37.809829    4376 logs.go:276] 1 containers: [f6b849b40c18]
	I0813 17:36:37.809890    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:36:37.820692    4376 logs.go:276] 4 containers: [825a106c46f0 ebcf3cf91752 a603ad954de4 635f6b29c5f2]
	I0813 17:36:37.820771    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:36:37.832155    4376 logs.go:276] 1 containers: [fbd9cf0ccb74]
	I0813 17:36:37.832202    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:36:37.844222    4376 logs.go:276] 1 containers: [f0e3dfdbe54f]
	I0813 17:36:37.844277    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:36:37.855216    4376 logs.go:276] 1 containers: [1d0b78d62f22]
	I0813 17:36:37.855280    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:36:37.865957    4376 logs.go:276] 0 containers: []
	W0813 17:36:37.865970    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:36:37.866013    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:36:37.878852    4376 logs.go:276] 1 containers: [ecc88d9761cf]
	I0813 17:36:37.878873    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:36:37.878879    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:36:37.916357    4376 logs.go:123] Gathering logs for coredns [825a106c46f0] ...
	I0813 17:36:37.916368    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825a106c46f0"
	I0813 17:36:37.929468    4376 logs.go:123] Gathering logs for coredns [a603ad954de4] ...
	I0813 17:36:37.929480    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a603ad954de4"
	I0813 17:36:37.941806    4376 logs.go:123] Gathering logs for kube-scheduler [fbd9cf0ccb74] ...
	I0813 17:36:37.941818    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbd9cf0ccb74"
	I0813 17:36:37.957415    4376 logs.go:123] Gathering logs for storage-provisioner [ecc88d9761cf] ...
	I0813 17:36:37.957426    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecc88d9761cf"
	I0813 17:36:37.970750    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:36:37.970760    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:36:37.975276    4376 logs.go:123] Gathering logs for coredns [ebcf3cf91752] ...
	I0813 17:36:37.975283    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcf3cf91752"
	I0813 17:36:37.987118    4376 logs.go:123] Gathering logs for coredns [635f6b29c5f2] ...
	I0813 17:36:37.987167    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 635f6b29c5f2"
	I0813 17:36:38.001689    4376 logs.go:123] Gathering logs for kube-proxy [f0e3dfdbe54f] ...
	I0813 17:36:38.001700    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e3dfdbe54f"
	I0813 17:36:38.022661    4376 logs.go:123] Gathering logs for kube-controller-manager [1d0b78d62f22] ...
	I0813 17:36:38.022670    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d0b78d62f22"
	I0813 17:36:38.041335    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:36:38.041347    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:36:38.067291    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:36:38.067301    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:36:38.079888    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:36:38.079899    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:36:38.118172    4376 logs.go:123] Gathering logs for kube-apiserver [84eb595689b8] ...
	I0813 17:36:38.118188    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84eb595689b8"
	I0813 17:36:38.134151    4376 logs.go:123] Gathering logs for etcd [f6b849b40c18] ...
	I0813 17:36:38.134168    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b849b40c18"
	I0813 17:36:40.651456    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:36:45.654018    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:36:45.654221    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:36:45.675797    4376 logs.go:276] 1 containers: [84eb595689b8]
	I0813 17:36:45.675904    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:36:45.690965    4376 logs.go:276] 1 containers: [f6b849b40c18]
	I0813 17:36:45.691054    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:36:45.702934    4376 logs.go:276] 4 containers: [825a106c46f0 ebcf3cf91752 a603ad954de4 635f6b29c5f2]
	I0813 17:36:45.703004    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:36:45.713929    4376 logs.go:276] 1 containers: [fbd9cf0ccb74]
	I0813 17:36:45.714001    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:36:45.724419    4376 logs.go:276] 1 containers: [f0e3dfdbe54f]
	I0813 17:36:45.724479    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:36:45.742336    4376 logs.go:276] 1 containers: [1d0b78d62f22]
	I0813 17:36:45.742413    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:36:45.752861    4376 logs.go:276] 0 containers: []
	W0813 17:36:45.752874    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:36:45.752929    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:36:45.763226    4376 logs.go:276] 1 containers: [ecc88d9761cf]
	I0813 17:36:45.763244    4376 logs.go:123] Gathering logs for coredns [825a106c46f0] ...
	I0813 17:36:45.763250    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825a106c46f0"
	I0813 17:36:45.775378    4376 logs.go:123] Gathering logs for coredns [a603ad954de4] ...
	I0813 17:36:45.775389    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a603ad954de4"
	I0813 17:36:45.787116    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:36:45.787125    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:36:45.791423    4376 logs.go:123] Gathering logs for kube-apiserver [84eb595689b8] ...
	I0813 17:36:45.791430    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84eb595689b8"
	I0813 17:36:45.805844    4376 logs.go:123] Gathering logs for etcd [f6b849b40c18] ...
	I0813 17:36:45.805854    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b849b40c18"
	I0813 17:36:45.820094    4376 logs.go:123] Gathering logs for coredns [ebcf3cf91752] ...
	I0813 17:36:45.820104    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcf3cf91752"
	I0813 17:36:45.831345    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:36:45.831355    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:36:45.843655    4376 logs.go:123] Gathering logs for kube-controller-manager [1d0b78d62f22] ...
	I0813 17:36:45.843665    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d0b78d62f22"
	I0813 17:36:45.860639    4376 logs.go:123] Gathering logs for storage-provisioner [ecc88d9761cf] ...
	I0813 17:36:45.860650    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecc88d9761cf"
	I0813 17:36:45.876432    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:36:45.876442    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:36:45.900483    4376 logs.go:123] Gathering logs for kube-scheduler [fbd9cf0ccb74] ...
	I0813 17:36:45.900490    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbd9cf0ccb74"
	I0813 17:36:45.915093    4376 logs.go:123] Gathering logs for kube-proxy [f0e3dfdbe54f] ...
	I0813 17:36:45.915103    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e3dfdbe54f"
	I0813 17:36:45.926600    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:36:45.926610    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:36:45.962654    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:36:45.962660    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:36:45.996389    4376 logs.go:123] Gathering logs for coredns [635f6b29c5f2] ...
	I0813 17:36:45.996400    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 635f6b29c5f2"
	I0813 17:36:48.513577    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:36:53.515694    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:36:53.515807    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:36:53.530425    4376 logs.go:276] 1 containers: [84eb595689b8]
	I0813 17:36:53.530507    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:36:53.543274    4376 logs.go:276] 1 containers: [f6b849b40c18]
	I0813 17:36:53.543345    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:36:53.554276    4376 logs.go:276] 4 containers: [825a106c46f0 ebcf3cf91752 a603ad954de4 635f6b29c5f2]
	I0813 17:36:53.554349    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:36:53.564741    4376 logs.go:276] 1 containers: [fbd9cf0ccb74]
	I0813 17:36:53.564799    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:36:53.575730    4376 logs.go:276] 1 containers: [f0e3dfdbe54f]
	I0813 17:36:53.575796    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:36:53.586535    4376 logs.go:276] 1 containers: [1d0b78d62f22]
	I0813 17:36:53.586597    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:36:53.596891    4376 logs.go:276] 0 containers: []
	W0813 17:36:53.596903    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:36:53.596960    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:36:53.607396    4376 logs.go:276] 1 containers: [ecc88d9761cf]
	I0813 17:36:53.607415    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:36:53.607421    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:36:53.630699    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:36:53.630705    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:36:53.664400    4376 logs.go:123] Gathering logs for kube-apiserver [84eb595689b8] ...
	I0813 17:36:53.664407    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84eb595689b8"
	I0813 17:36:53.679214    4376 logs.go:123] Gathering logs for kube-scheduler [fbd9cf0ccb74] ...
	I0813 17:36:53.679222    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbd9cf0ccb74"
	I0813 17:36:53.693644    4376 logs.go:123] Gathering logs for kube-proxy [f0e3dfdbe54f] ...
	I0813 17:36:53.693654    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e3dfdbe54f"
	I0813 17:36:53.705227    4376 logs.go:123] Gathering logs for coredns [ebcf3cf91752] ...
	I0813 17:36:53.705238    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcf3cf91752"
	I0813 17:36:53.716425    4376 logs.go:123] Gathering logs for kube-controller-manager [1d0b78d62f22] ...
	I0813 17:36:53.716434    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d0b78d62f22"
	I0813 17:36:53.741271    4376 logs.go:123] Gathering logs for storage-provisioner [ecc88d9761cf] ...
	I0813 17:36:53.741281    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecc88d9761cf"
	I0813 17:36:53.764275    4376 logs.go:123] Gathering logs for coredns [825a106c46f0] ...
	I0813 17:36:53.764286    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825a106c46f0"
	I0813 17:36:53.775708    4376 logs.go:123] Gathering logs for coredns [635f6b29c5f2] ...
	I0813 17:36:53.775718    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 635f6b29c5f2"
	I0813 17:36:53.787344    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:36:53.787357    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:36:53.799672    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:36:53.799681    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:36:53.803882    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:36:53.803889    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:36:53.844430    4376 logs.go:123] Gathering logs for etcd [f6b849b40c18] ...
	I0813 17:36:53.844441    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b849b40c18"
	I0813 17:36:53.858205    4376 logs.go:123] Gathering logs for coredns [a603ad954de4] ...
	I0813 17:36:53.858215    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a603ad954de4"
	I0813 17:36:56.372454    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:37:01.372905    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:37:01.373001    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:37:01.384706    4376 logs.go:276] 1 containers: [84eb595689b8]
	I0813 17:37:01.384778    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:37:01.419198    4376 logs.go:276] 1 containers: [f6b849b40c18]
	I0813 17:37:01.419276    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:37:01.439071    4376 logs.go:276] 4 containers: [825a106c46f0 ebcf3cf91752 a603ad954de4 635f6b29c5f2]
	I0813 17:37:01.439145    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:37:01.451039    4376 logs.go:276] 1 containers: [fbd9cf0ccb74]
	I0813 17:37:01.451095    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:37:01.463673    4376 logs.go:276] 1 containers: [f0e3dfdbe54f]
	I0813 17:37:01.463748    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:37:01.475595    4376 logs.go:276] 1 containers: [1d0b78d62f22]
	I0813 17:37:01.475660    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:37:01.488452    4376 logs.go:276] 0 containers: []
	W0813 17:37:01.488465    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:37:01.488507    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:37:01.499943    4376 logs.go:276] 1 containers: [ecc88d9761cf]
	I0813 17:37:01.499958    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:37:01.499964    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:37:01.512042    4376 logs.go:123] Gathering logs for etcd [f6b849b40c18] ...
	I0813 17:37:01.512056    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b849b40c18"
	I0813 17:37:01.528200    4376 logs.go:123] Gathering logs for storage-provisioner [ecc88d9761cf] ...
	I0813 17:37:01.528226    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecc88d9761cf"
	I0813 17:37:01.543389    4376 logs.go:123] Gathering logs for kube-proxy [f0e3dfdbe54f] ...
	I0813 17:37:01.543400    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e3dfdbe54f"
	I0813 17:37:01.555814    4376 logs.go:123] Gathering logs for kube-controller-manager [1d0b78d62f22] ...
	I0813 17:37:01.555825    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d0b78d62f22"
	I0813 17:37:01.575361    4376 logs.go:123] Gathering logs for coredns [825a106c46f0] ...
	I0813 17:37:01.575374    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825a106c46f0"
	I0813 17:37:01.587772    4376 logs.go:123] Gathering logs for coredns [a603ad954de4] ...
	I0813 17:37:01.587783    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a603ad954de4"
	I0813 17:37:01.599789    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:37:01.599801    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:37:01.624626    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:37:01.624638    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:37:01.629250    4376 logs.go:123] Gathering logs for kube-apiserver [84eb595689b8] ...
	I0813 17:37:01.629256    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84eb595689b8"
	I0813 17:37:01.644388    4376 logs.go:123] Gathering logs for coredns [ebcf3cf91752] ...
	I0813 17:37:01.644400    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcf3cf91752"
	I0813 17:37:01.656595    4376 logs.go:123] Gathering logs for coredns [635f6b29c5f2] ...
	I0813 17:37:01.656606    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 635f6b29c5f2"
	I0813 17:37:01.670143    4376 logs.go:123] Gathering logs for kube-scheduler [fbd9cf0ccb74] ...
	I0813 17:37:01.670155    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbd9cf0ccb74"
	I0813 17:37:01.685544    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:37:01.685555    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:37:01.722312    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:37:01.722323    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:37:04.264096    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:37:09.266168    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:37:09.266440    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:37:09.301487    4376 logs.go:276] 1 containers: [84eb595689b8]
	I0813 17:37:09.301629    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:37:09.321817    4376 logs.go:276] 1 containers: [f6b849b40c18]
	I0813 17:37:09.321924    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:37:09.336844    4376 logs.go:276] 4 containers: [825a106c46f0 ebcf3cf91752 a603ad954de4 635f6b29c5f2]
	I0813 17:37:09.336929    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:37:09.349240    4376 logs.go:276] 1 containers: [fbd9cf0ccb74]
	I0813 17:37:09.349322    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:37:09.359704    4376 logs.go:276] 1 containers: [f0e3dfdbe54f]
	I0813 17:37:09.359777    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:37:09.370628    4376 logs.go:276] 1 containers: [1d0b78d62f22]
	I0813 17:37:09.370693    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:37:09.380834    4376 logs.go:276] 0 containers: []
	W0813 17:37:09.380846    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:37:09.380910    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:37:09.391584    4376 logs.go:276] 1 containers: [ecc88d9761cf]
	I0813 17:37:09.391604    4376 logs.go:123] Gathering logs for storage-provisioner [ecc88d9761cf] ...
	I0813 17:37:09.391609    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecc88d9761cf"
	I0813 17:37:09.402805    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:37:09.402815    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:37:09.439036    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:37:09.439045    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:37:09.443276    4376 logs.go:123] Gathering logs for kube-apiserver [84eb595689b8] ...
	I0813 17:37:09.443283    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84eb595689b8"
	I0813 17:37:09.457861    4376 logs.go:123] Gathering logs for coredns [a603ad954de4] ...
	I0813 17:37:09.457871    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a603ad954de4"
	I0813 17:37:09.469348    4376 logs.go:123] Gathering logs for kube-controller-manager [1d0b78d62f22] ...
	I0813 17:37:09.469359    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d0b78d62f22"
	I0813 17:37:09.486631    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:37:09.486640    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:37:09.521646    4376 logs.go:123] Gathering logs for coredns [825a106c46f0] ...
	I0813 17:37:09.521657    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825a106c46f0"
	I0813 17:37:09.533351    4376 logs.go:123] Gathering logs for kube-scheduler [fbd9cf0ccb74] ...
	I0813 17:37:09.533365    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbd9cf0ccb74"
	I0813 17:37:09.548402    4376 logs.go:123] Gathering logs for kube-proxy [f0e3dfdbe54f] ...
	I0813 17:37:09.548413    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e3dfdbe54f"
	I0813 17:37:09.569122    4376 logs.go:123] Gathering logs for etcd [f6b849b40c18] ...
	I0813 17:37:09.569132    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b849b40c18"
	I0813 17:37:09.582692    4376 logs.go:123] Gathering logs for coredns [635f6b29c5f2] ...
	I0813 17:37:09.582703    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 635f6b29c5f2"
	I0813 17:37:09.594013    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:37:09.594023    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:37:09.611671    4376 logs.go:123] Gathering logs for coredns [ebcf3cf91752] ...
	I0813 17:37:09.611684    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcf3cf91752"
	I0813 17:37:09.623507    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:37:09.623516    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:37:12.150701    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:37:17.153309    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:37:17.153623    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:37:17.191789    4376 logs.go:276] 1 containers: [84eb595689b8]
	I0813 17:37:17.191922    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:37:17.213131    4376 logs.go:276] 1 containers: [f6b849b40c18]
	I0813 17:37:17.213218    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:37:17.227560    4376 logs.go:276] 4 containers: [825a106c46f0 ebcf3cf91752 a603ad954de4 635f6b29c5f2]
	I0813 17:37:17.227635    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:37:17.239016    4376 logs.go:276] 1 containers: [fbd9cf0ccb74]
	I0813 17:37:17.239096    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:37:17.249939    4376 logs.go:276] 1 containers: [f0e3dfdbe54f]
	I0813 17:37:17.250011    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:37:17.264164    4376 logs.go:276] 1 containers: [1d0b78d62f22]
	I0813 17:37:17.264246    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:37:17.275548    4376 logs.go:276] 0 containers: []
	W0813 17:37:17.275559    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:37:17.275618    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:37:17.285854    4376 logs.go:276] 1 containers: [ecc88d9761cf]
	I0813 17:37:17.285870    4376 logs.go:123] Gathering logs for kube-controller-manager [1d0b78d62f22] ...
	I0813 17:37:17.285876    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d0b78d62f22"
	I0813 17:37:17.305965    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:37:17.305975    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:37:17.340712    4376 logs.go:123] Gathering logs for kube-proxy [f0e3dfdbe54f] ...
	I0813 17:37:17.340724    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e3dfdbe54f"
	I0813 17:37:17.352771    4376 logs.go:123] Gathering logs for storage-provisioner [ecc88d9761cf] ...
	I0813 17:37:17.352780    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecc88d9761cf"
	I0813 17:37:17.368461    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:37:17.368469    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:37:17.379833    4376 logs.go:123] Gathering logs for coredns [ebcf3cf91752] ...
	I0813 17:37:17.379842    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcf3cf91752"
	I0813 17:37:17.394977    4376 logs.go:123] Gathering logs for coredns [635f6b29c5f2] ...
	I0813 17:37:17.394989    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 635f6b29c5f2"
	I0813 17:37:17.406387    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:37:17.406399    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:37:17.431033    4376 logs.go:123] Gathering logs for kube-apiserver [84eb595689b8] ...
	I0813 17:37:17.431040    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84eb595689b8"
	I0813 17:37:17.446071    4376 logs.go:123] Gathering logs for kube-scheduler [fbd9cf0ccb74] ...
	I0813 17:37:17.446082    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbd9cf0ccb74"
	I0813 17:37:17.466110    4376 logs.go:123] Gathering logs for etcd [f6b849b40c18] ...
	I0813 17:37:17.466121    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b849b40c18"
	I0813 17:37:17.483305    4376 logs.go:123] Gathering logs for coredns [825a106c46f0] ...
	I0813 17:37:17.483317    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825a106c46f0"
	I0813 17:37:17.495273    4376 logs.go:123] Gathering logs for coredns [a603ad954de4] ...
	I0813 17:37:17.495283    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a603ad954de4"
	I0813 17:37:17.507604    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:37:17.507614    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:37:17.541535    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:37:17.541546    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:37:20.048007    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:37:25.049861    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:37:25.049957    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:37:25.061765    4376 logs.go:276] 1 containers: [84eb595689b8]
	I0813 17:37:25.061840    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:37:25.074472    4376 logs.go:276] 1 containers: [f6b849b40c18]
	I0813 17:37:25.074544    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:37:25.086636    4376 logs.go:276] 4 containers: [825a106c46f0 ebcf3cf91752 a603ad954de4 635f6b29c5f2]
	I0813 17:37:25.086694    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:37:25.099760    4376 logs.go:276] 1 containers: [fbd9cf0ccb74]
	I0813 17:37:25.099838    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:37:25.110783    4376 logs.go:276] 1 containers: [f0e3dfdbe54f]
	I0813 17:37:25.110861    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:37:25.122511    4376 logs.go:276] 1 containers: [1d0b78d62f22]
	I0813 17:37:25.122574    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:37:25.133937    4376 logs.go:276] 0 containers: []
	W0813 17:37:25.133947    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:37:25.133998    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:37:25.151480    4376 logs.go:276] 1 containers: [ecc88d9761cf]
	I0813 17:37:25.151495    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:37:25.151500    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:37:25.175726    4376 logs.go:123] Gathering logs for coredns [ebcf3cf91752] ...
	I0813 17:37:25.175742    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcf3cf91752"
	I0813 17:37:25.193006    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:37:25.193015    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:37:25.197377    4376 logs.go:123] Gathering logs for coredns [a603ad954de4] ...
	I0813 17:37:25.197383    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a603ad954de4"
	I0813 17:37:25.213678    4376 logs.go:123] Gathering logs for kube-proxy [f0e3dfdbe54f] ...
	I0813 17:37:25.213688    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e3dfdbe54f"
	I0813 17:37:25.227512    4376 logs.go:123] Gathering logs for kube-controller-manager [1d0b78d62f22] ...
	I0813 17:37:25.227528    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d0b78d62f22"
	I0813 17:37:25.255236    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:37:25.255248    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:37:25.269047    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:37:25.269058    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:37:25.308548    4376 logs.go:123] Gathering logs for kube-apiserver [84eb595689b8] ...
	I0813 17:37:25.308563    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84eb595689b8"
	I0813 17:37:25.323899    4376 logs.go:123] Gathering logs for coredns [825a106c46f0] ...
	I0813 17:37:25.323907    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825a106c46f0"
	I0813 17:37:25.338548    4376 logs.go:123] Gathering logs for coredns [635f6b29c5f2] ...
	I0813 17:37:25.338561    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 635f6b29c5f2"
	I0813 17:37:25.350771    4376 logs.go:123] Gathering logs for kube-scheduler [fbd9cf0ccb74] ...
	I0813 17:37:25.350783    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbd9cf0ccb74"
	I0813 17:37:25.368515    4376 logs.go:123] Gathering logs for storage-provisioner [ecc88d9761cf] ...
	I0813 17:37:25.368527    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecc88d9761cf"
	I0813 17:37:25.381186    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:37:25.381198    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:37:25.419357    4376 logs.go:123] Gathering logs for etcd [f6b849b40c18] ...
	I0813 17:37:25.419370    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b849b40c18"
	I0813 17:37:27.937078    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:37:32.939393    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:37:32.939721    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:37:32.978022    4376 logs.go:276] 1 containers: [84eb595689b8]
	I0813 17:37:32.978169    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:37:32.999077    4376 logs.go:276] 1 containers: [f6b849b40c18]
	I0813 17:37:32.999180    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:37:33.014720    4376 logs.go:276] 4 containers: [825a106c46f0 ebcf3cf91752 a603ad954de4 635f6b29c5f2]
	I0813 17:37:33.014817    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:37:33.027347    4376 logs.go:276] 1 containers: [fbd9cf0ccb74]
	I0813 17:37:33.027420    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:37:33.037995    4376 logs.go:276] 1 containers: [f0e3dfdbe54f]
	I0813 17:37:33.038075    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:37:33.052381    4376 logs.go:276] 1 containers: [1d0b78d62f22]
	I0813 17:37:33.052452    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:37:33.063523    4376 logs.go:276] 0 containers: []
	W0813 17:37:33.063541    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:37:33.063612    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:37:33.073955    4376 logs.go:276] 1 containers: [ecc88d9761cf]
	I0813 17:37:33.073979    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:37:33.073984    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:37:33.108385    4376 logs.go:123] Gathering logs for coredns [a603ad954de4] ...
	I0813 17:37:33.108393    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a603ad954de4"
	I0813 17:37:33.120078    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:37:33.120090    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:37:33.132649    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:37:33.132658    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:37:33.168148    4376 logs.go:123] Gathering logs for coredns [825a106c46f0] ...
	I0813 17:37:33.168159    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825a106c46f0"
	I0813 17:37:33.181175    4376 logs.go:123] Gathering logs for storage-provisioner [ecc88d9761cf] ...
	I0813 17:37:33.181186    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecc88d9761cf"
	I0813 17:37:33.193220    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:37:33.193231    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:37:33.197540    4376 logs.go:123] Gathering logs for etcd [f6b849b40c18] ...
	I0813 17:37:33.197547    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b849b40c18"
	I0813 17:37:33.211406    4376 logs.go:123] Gathering logs for kube-proxy [f0e3dfdbe54f] ...
	I0813 17:37:33.211417    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e3dfdbe54f"
	I0813 17:37:33.235317    4376 logs.go:123] Gathering logs for kube-controller-manager [1d0b78d62f22] ...
	I0813 17:37:33.235328    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d0b78d62f22"
	I0813 17:37:33.252356    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:37:33.252369    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:37:33.276151    4376 logs.go:123] Gathering logs for kube-apiserver [84eb595689b8] ...
	I0813 17:37:33.276161    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84eb595689b8"
	I0813 17:37:33.290065    4376 logs.go:123] Gathering logs for coredns [ebcf3cf91752] ...
	I0813 17:37:33.290076    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcf3cf91752"
	I0813 17:37:33.301867    4376 logs.go:123] Gathering logs for coredns [635f6b29c5f2] ...
	I0813 17:37:33.301878    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 635f6b29c5f2"
	I0813 17:37:33.314506    4376 logs.go:123] Gathering logs for kube-scheduler [fbd9cf0ccb74] ...
	I0813 17:37:33.314516    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbd9cf0ccb74"
	I0813 17:37:35.832367    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:37:40.834963    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:37:40.835290    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:37:40.879240    4376 logs.go:276] 1 containers: [84eb595689b8]
	I0813 17:37:40.879366    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:37:40.898091    4376 logs.go:276] 1 containers: [f6b849b40c18]
	I0813 17:37:40.898191    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:37:40.912159    4376 logs.go:276] 4 containers: [825a106c46f0 ebcf3cf91752 a603ad954de4 635f6b29c5f2]
	I0813 17:37:40.912238    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:37:40.924046    4376 logs.go:276] 1 containers: [fbd9cf0ccb74]
	I0813 17:37:40.924118    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:37:40.934084    4376 logs.go:276] 1 containers: [f0e3dfdbe54f]
	I0813 17:37:40.934152    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:37:40.948865    4376 logs.go:276] 1 containers: [1d0b78d62f22]
	I0813 17:37:40.948936    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:37:40.959307    4376 logs.go:276] 0 containers: []
	W0813 17:37:40.959317    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:37:40.959382    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:37:40.970266    4376 logs.go:276] 1 containers: [ecc88d9761cf]
	I0813 17:37:40.970286    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:37:40.970292    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:37:41.006684    4376 logs.go:123] Gathering logs for etcd [f6b849b40c18] ...
	I0813 17:37:41.006693    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b849b40c18"
	I0813 17:37:41.023060    4376 logs.go:123] Gathering logs for coredns [a603ad954de4] ...
	I0813 17:37:41.023070    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a603ad954de4"
	I0813 17:37:41.039032    4376 logs.go:123] Gathering logs for kube-scheduler [fbd9cf0ccb74] ...
	I0813 17:37:41.039043    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbd9cf0ccb74"
	I0813 17:37:41.054318    4376 logs.go:123] Gathering logs for kube-proxy [f0e3dfdbe54f] ...
	I0813 17:37:41.054330    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e3dfdbe54f"
	I0813 17:37:41.065980    4376 logs.go:123] Gathering logs for coredns [825a106c46f0] ...
	I0813 17:37:41.065991    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825a106c46f0"
	I0813 17:37:41.083119    4376 logs.go:123] Gathering logs for coredns [635f6b29c5f2] ...
	I0813 17:37:41.083128    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 635f6b29c5f2"
	I0813 17:37:41.094571    4376 logs.go:123] Gathering logs for storage-provisioner [ecc88d9761cf] ...
	I0813 17:37:41.094582    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecc88d9761cf"
	I0813 17:37:41.105571    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:37:41.105584    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:37:41.118091    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:37:41.118101    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:37:41.152632    4376 logs.go:123] Gathering logs for kube-apiserver [84eb595689b8] ...
	I0813 17:37:41.152643    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84eb595689b8"
	I0813 17:37:41.166700    4376 logs.go:123] Gathering logs for kube-controller-manager [1d0b78d62f22] ...
	I0813 17:37:41.166708    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d0b78d62f22"
	I0813 17:37:41.196235    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:37:41.196244    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:37:41.200836    4376 logs.go:123] Gathering logs for coredns [ebcf3cf91752] ...
	I0813 17:37:41.200842    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcf3cf91752"
	I0813 17:37:41.212399    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:37:41.212410    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:37:43.739393    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:37:48.741684    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:37:48.741787    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:37:48.753307    4376 logs.go:276] 1 containers: [84eb595689b8]
	I0813 17:37:48.753368    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:37:48.765271    4376 logs.go:276] 1 containers: [f6b849b40c18]
	I0813 17:37:48.765343    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:37:48.777259    4376 logs.go:276] 4 containers: [825a106c46f0 ebcf3cf91752 a603ad954de4 635f6b29c5f2]
	I0813 17:37:48.777345    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:37:48.790596    4376 logs.go:276] 1 containers: [fbd9cf0ccb74]
	I0813 17:37:48.790655    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:37:48.802939    4376 logs.go:276] 1 containers: [f0e3dfdbe54f]
	I0813 17:37:48.803015    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:37:48.814962    4376 logs.go:276] 1 containers: [1d0b78d62f22]
	I0813 17:37:48.815034    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:37:48.826470    4376 logs.go:276] 0 containers: []
	W0813 17:37:48.826484    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:37:48.826556    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:37:48.837234    4376 logs.go:276] 1 containers: [ecc88d9761cf]
	I0813 17:37:48.837253    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:37:48.837260    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:37:48.873610    4376 logs.go:123] Gathering logs for kube-controller-manager [1d0b78d62f22] ...
	I0813 17:37:48.873627    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d0b78d62f22"
	I0813 17:37:48.893289    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:37:48.893301    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:37:48.919391    4376 logs.go:123] Gathering logs for coredns [a603ad954de4] ...
	I0813 17:37:48.919405    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a603ad954de4"
	I0813 17:37:48.933177    4376 logs.go:123] Gathering logs for kube-proxy [f0e3dfdbe54f] ...
	I0813 17:37:48.933189    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e3dfdbe54f"
	I0813 17:37:48.946286    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:37:48.946298    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:37:48.958761    4376 logs.go:123] Gathering logs for kube-apiserver [84eb595689b8] ...
	I0813 17:37:48.958775    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84eb595689b8"
	I0813 17:37:48.974697    4376 logs.go:123] Gathering logs for etcd [f6b849b40c18] ...
	I0813 17:37:48.974713    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b849b40c18"
	I0813 17:37:48.990900    4376 logs.go:123] Gathering logs for coredns [ebcf3cf91752] ...
	I0813 17:37:48.990911    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcf3cf91752"
	I0813 17:37:49.003288    4376 logs.go:123] Gathering logs for coredns [635f6b29c5f2] ...
	I0813 17:37:49.003299    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 635f6b29c5f2"
	I0813 17:37:49.020889    4376 logs.go:123] Gathering logs for kube-scheduler [fbd9cf0ccb74] ...
	I0813 17:37:49.020901    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbd9cf0ccb74"
	I0813 17:37:49.038310    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:37:49.038320    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:37:49.043111    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:37:49.043118    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:37:49.083943    4376 logs.go:123] Gathering logs for coredns [825a106c46f0] ...
	I0813 17:37:49.083966    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825a106c46f0"
	I0813 17:37:49.096919    4376 logs.go:123] Gathering logs for storage-provisioner [ecc88d9761cf] ...
	I0813 17:37:49.096929    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecc88d9761cf"
	I0813 17:37:51.611447    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:37:56.612821    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:37:56.613201    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0813 17:37:56.654564    4376 logs.go:276] 1 containers: [84eb595689b8]
	I0813 17:37:56.654709    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0813 17:37:56.676433    4376 logs.go:276] 1 containers: [f6b849b40c18]
	I0813 17:37:56.676542    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0813 17:37:56.691461    4376 logs.go:276] 4 containers: [825a106c46f0 ebcf3cf91752 a603ad954de4 635f6b29c5f2]
	I0813 17:37:56.691545    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0813 17:37:56.705395    4376 logs.go:276] 1 containers: [fbd9cf0ccb74]
	I0813 17:37:56.705471    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0813 17:37:56.716386    4376 logs.go:276] 1 containers: [f0e3dfdbe54f]
	I0813 17:37:56.716459    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0813 17:37:56.727970    4376 logs.go:276] 1 containers: [1d0b78d62f22]
	I0813 17:37:56.728044    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0813 17:37:56.746537    4376 logs.go:276] 0 containers: []
	W0813 17:37:56.746551    4376 logs.go:278] No container was found matching "kindnet"
	I0813 17:37:56.746609    4376 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0813 17:37:56.757251    4376 logs.go:276] 1 containers: [ecc88d9761cf]
	I0813 17:37:56.757269    4376 logs.go:123] Gathering logs for describe nodes ...
	I0813 17:37:56.757274    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 17:37:56.791703    4376 logs.go:123] Gathering logs for coredns [a603ad954de4] ...
	I0813 17:37:56.791715    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a603ad954de4"
	I0813 17:37:56.803718    4376 logs.go:123] Gathering logs for coredns [635f6b29c5f2] ...
	I0813 17:37:56.803728    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 635f6b29c5f2"
	I0813 17:37:56.816652    4376 logs.go:123] Gathering logs for Docker ...
	I0813 17:37:56.816663    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0813 17:37:56.841050    4376 logs.go:123] Gathering logs for container status ...
	I0813 17:37:56.841058    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 17:37:56.852809    4376 logs.go:123] Gathering logs for dmesg ...
	I0813 17:37:56.852820    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 17:37:56.857647    4376 logs.go:123] Gathering logs for etcd [f6b849b40c18] ...
	I0813 17:37:56.857654    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b849b40c18"
	I0813 17:37:56.871661    4376 logs.go:123] Gathering logs for coredns [ebcf3cf91752] ...
	I0813 17:37:56.871670    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcf3cf91752"
	I0813 17:37:56.882847    4376 logs.go:123] Gathering logs for kubelet ...
	I0813 17:37:56.882856    4376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 17:37:56.916704    4376 logs.go:123] Gathering logs for coredns [825a106c46f0] ...
	I0813 17:37:56.916718    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825a106c46f0"
	I0813 17:37:56.928979    4376 logs.go:123] Gathering logs for storage-provisioner [ecc88d9761cf] ...
	I0813 17:37:56.928990    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecc88d9761cf"
	I0813 17:37:56.940463    4376 logs.go:123] Gathering logs for kube-controller-manager [1d0b78d62f22] ...
	I0813 17:37:56.940474    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d0b78d62f22"
	I0813 17:37:56.962429    4376 logs.go:123] Gathering logs for kube-apiserver [84eb595689b8] ...
	I0813 17:37:56.962437    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84eb595689b8"
	I0813 17:37:56.977684    4376 logs.go:123] Gathering logs for kube-scheduler [fbd9cf0ccb74] ...
	I0813 17:37:56.977693    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbd9cf0ccb74"
	I0813 17:37:56.992247    4376 logs.go:123] Gathering logs for kube-proxy [f0e3dfdbe54f] ...
	I0813 17:37:56.992258    4376 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0e3dfdbe54f"
	I0813 17:37:59.505950    4376 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0813 17:38:04.508221    4376 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 17:38:04.514746    4376 out.go:177] 
	W0813 17:38:04.519674    4376 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0813 17:38:04.519686    4376 out.go:239] * 
	* 
	W0813 17:38:04.520322    4376 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:38:04.529626    4376 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-967000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
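
Note on the failure mode above: the start spent its entire 6m0s node-wait window polling https://10.0.2.15:8443/healthz, every probe timed out, and minikube exited with GUEST_START. A minimal manual probe of the same endpoint, a sketch assuming the guest is reachable over SSH and that curl is available in the guest image:

	# -k skips TLS verification, since the apiserver serves a cluster-internal cert
	out/minikube-darwin-arm64 ssh -p stopped-upgrade-967000 -- curl -sk https://10.0.2.15:8443/healthz
	# a healthy apiserver answers "ok"; a hang or refused connection reproduces the
	# "apiserver healthz never reported healthy" exit recorded above
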
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (571.95s)

TestPause/serial/Start (9.99s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-253000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-253000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.933779958s)

-- stdout --
	* [pause-253000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-253000" primary control-plane node in "pause-253000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-253000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-253000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-253000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
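
Every failure from here on shares the same root cause: the qemu2 driver launches the VM through socket_vmnet_client, which must first dial the unix socket at /var/run/socket_vmnet, and that dial is refused, so no guest ever boots. Some quick host-side checks, a sketch assuming the Homebrew install paths that appear in these logs (/opt/socket_vmnet/bin/socket_vmnet_client, /var/run/socket_vmnet):

	ls -l /var/run/socket_vmnet    # does the unix socket exist at all?
	pgrep -fl socket_vmnet         # is the socket_vmnet daemon running?
	# socket_vmnet normally runs as a root service; with a Homebrew install it can
	# usually be restarted with:
	sudo brew services restart socket_vmnet
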
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-253000 -n pause-253000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-253000 -n pause-253000: exit status 7 (53.028625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-253000" host is not running, skipping log retrieval (state="Stopped")
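
The exit status 7 from the post-mortem status check is expected here: minikube status encodes component health on the exit code's bits (1 = VM not running, 2 = cluster not running, 4 = Kubernetes not running), so 7 means all three are down, consistent with a VM that was never created. For example:

	out/minikube-darwin-arm64 status --format={{.Host}} -p pause-253000; echo $?
	# prints "Stopped", then 7
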
--- FAIL: TestPause/serial/Start (9.99s)

TestNoKubernetes/serial/StartWithK8s (9.91s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-702000 --driver=qemu2 
E0813 17:35:36.774949    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/functional-174000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-702000 --driver=qemu2 : exit status 80 (9.853701959s)

-- stdout --
	* [NoKubernetes-702000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-702000" primary control-plane node in "NoKubernetes-702000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-702000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-702000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-702000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-702000 -n NoKubernetes-702000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-702000 -n NoKubernetes-702000: exit status 7 (56.658625ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-702000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.91s)

TestNoKubernetes/serial/StartWithStopK8s (5.43s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-702000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-702000 --no-kubernetes --driver=qemu2 : exit status 80 (5.36338325s)

-- stdout --
	* [NoKubernetes-702000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-702000
	* Restarting existing qemu2 VM for "NoKubernetes-702000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-702000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-702000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-702000 --no-kubernetes --driver=qemu2 " : exit status 80
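
Unlike the fresh-create failures above, this run reuses the existing NoKubernetes-702000 profile ("Restarting existing qemu2 VM"), so the error chain reads "driver start:" rather than "creating host: create: creating:"; the terminal error is the same refused dial of /var/run/socket_vmnet. The dial can be exercised on its own with the client binary, a sketch assuming the paths shown in these logs:

	# socket_vmnet_client connects to the socket, then execs the given command with
	# the connected socket inherited; a trivial command is enough to test the dial
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true && echo connected
	# "Failed to connect ... Connection refused" here reproduces the failure above
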
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-702000 -n NoKubernetes-702000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-702000 -n NoKubernetes-702000: exit status 7 (67.160375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-702000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.43s)

TestNoKubernetes/serial/Start (5.34s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-702000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-702000 --no-kubernetes --driver=qemu2 : exit status 80 (5.269220916s)

-- stdout --
	* [NoKubernetes-702000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-702000
	* Restarting existing qemu2 VM for "NoKubernetes-702000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-702000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-702000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-702000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-702000 -n NoKubernetes-702000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-702000 -n NoKubernetes-702000: exit status 7 (66.007375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-702000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.34s)

TestNoKubernetes/serial/StartNoArgs (5.29s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-702000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-702000 --driver=qemu2 : exit status 80 (5.264391833s)

-- stdout --
	* [NoKubernetes-702000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-702000
	* Restarting existing qemu2 VM for "NoKubernetes-702000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-702000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-702000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-702000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-702000 -n NoKubernetes-702000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-702000 -n NoKubernetes-702000: exit status 7 (29.531375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-702000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.29s)

TestNetworkPlugins/group/auto/Start (9.88s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-986000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-986000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.878906833s)

-- stdout --
	* [auto-986000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-986000" primary control-plane node in "auto-986000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-986000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0813 17:36:26.483561    5124 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:36:26.483691    5124 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:36:26.483694    5124 out.go:304] Setting ErrFile to fd 2...
	I0813 17:36:26.483697    5124 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:36:26.483841    5124 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:36:26.484922    5124 out.go:298] Setting JSON to false
	I0813 17:36:26.501558    5124 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3950,"bootTime":1723591836,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:36:26.501640    5124 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:36:26.506341    5124 out.go:177] * [auto-986000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:36:26.514285    5124 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:36:26.514318    5124 notify.go:220] Checking for updates...
	I0813 17:36:26.520322    5124 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:36:26.523286    5124 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:36:26.526271    5124 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:36:26.529286    5124 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:36:26.532262    5124 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:36:26.535596    5124 config.go:182] Loaded profile config "multinode-980000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:36:26.535667    5124 config.go:182] Loaded profile config "stopped-upgrade-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0813 17:36:26.535713    5124 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:36:26.539273    5124 out.go:177] * Using the qemu2 driver based on user configuration
	I0813 17:36:26.546261    5124 start.go:297] selected driver: qemu2
	I0813 17:36:26.546268    5124 start.go:901] validating driver "qemu2" against <nil>
	I0813 17:36:26.546274    5124 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:36:26.548370    5124 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 17:36:26.551288    5124 out.go:177] * Automatically selected the socket_vmnet network
	I0813 17:36:26.554333    5124 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 17:36:26.554371    5124 cni.go:84] Creating CNI manager for ""
	I0813 17:36:26.554379    5124 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0813 17:36:26.554385    5124 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 17:36:26.554416    5124 start.go:340] cluster config:
	{Name:auto-986000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:36:26.557956    5124 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:36:26.565244    5124 out.go:177] * Starting "auto-986000" primary control-plane node in "auto-986000" cluster
	I0813 17:36:26.569283    5124 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:36:26.569305    5124 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 17:36:26.569313    5124 cache.go:56] Caching tarball of preloaded images
	I0813 17:36:26.569361    5124 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:36:26.569366    5124 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0813 17:36:26.569418    5124 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/auto-986000/config.json ...
	I0813 17:36:26.569428    5124 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/auto-986000/config.json: {Name:mk034e48cb172f0f295651ce566e409dd21174b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:36:26.569715    5124 start.go:360] acquireMachinesLock for auto-986000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:36:26.569744    5124 start.go:364] duration metric: took 24.625µs to acquireMachinesLock for "auto-986000"
	I0813 17:36:26.569755    5124 start.go:93] Provisioning new machine with config: &{Name:auto-986000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:36:26.569831    5124 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:36:26.578279    5124 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0813 17:36:26.593006    5124 start.go:159] libmachine.API.Create for "auto-986000" (driver="qemu2")
	I0813 17:36:26.593070    5124 client.go:168] LocalClient.Create starting
	I0813 17:36:26.593140    5124 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:36:26.593175    5124 main.go:141] libmachine: Decoding PEM data...
	I0813 17:36:26.593183    5124 main.go:141] libmachine: Parsing certificate...
	I0813 17:36:26.593227    5124 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:36:26.593250    5124 main.go:141] libmachine: Decoding PEM data...
	I0813 17:36:26.593267    5124 main.go:141] libmachine: Parsing certificate...
	I0813 17:36:26.593668    5124 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:36:26.736279    5124 main.go:141] libmachine: Creating SSH key...
	I0813 17:36:26.856133    5124 main.go:141] libmachine: Creating Disk image...
	I0813 17:36:26.856142    5124 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:36:26.856350    5124 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/auto-986000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/auto-986000/disk.qcow2
	I0813 17:36:26.865822    5124 main.go:141] libmachine: STDOUT: 
	I0813 17:36:26.865843    5124 main.go:141] libmachine: STDERR: 
	I0813 17:36:26.865886    5124 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/auto-986000/disk.qcow2 +20000M
	I0813 17:36:26.874048    5124 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:36:26.874067    5124 main.go:141] libmachine: STDERR: 
	I0813 17:36:26.874080    5124 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/auto-986000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/auto-986000/disk.qcow2
	I0813 17:36:26.874087    5124 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:36:26.874101    5124 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:36:26.874127    5124 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/auto-986000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/auto-986000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/auto-986000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:31:ff:2f:4d:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/auto-986000/disk.qcow2
	I0813 17:36:26.875860    5124 main.go:141] libmachine: STDOUT: 
	I0813 17:36:26.875877    5124 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:36:26.875897    5124 client.go:171] duration metric: took 282.824958ms to LocalClient.Create
	I0813 17:36:28.878076    5124 start.go:128] duration metric: took 2.308252833s to createHost
	I0813 17:36:28.878176    5124 start.go:83] releasing machines lock for "auto-986000", held for 2.308461708s
	W0813 17:36:28.878244    5124 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:36:28.892661    5124 out.go:177] * Deleting "auto-986000" in qemu2 ...
	W0813 17:36:28.921771    5124 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:36:28.921816    5124 start.go:729] Will try again in 5 seconds ...
	I0813 17:36:33.924023    5124 start.go:360] acquireMachinesLock for auto-986000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:36:33.924597    5124 start.go:364] duration metric: took 444.834µs to acquireMachinesLock for "auto-986000"
	I0813 17:36:33.924731    5124 start.go:93] Provisioning new machine with config: &{Name:auto-986000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:36:33.925060    5124 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:36:33.930739    5124 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0813 17:36:33.981054    5124 start.go:159] libmachine.API.Create for "auto-986000" (driver="qemu2")
	I0813 17:36:33.981116    5124 client.go:168] LocalClient.Create starting
	I0813 17:36:33.981266    5124 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:36:33.981332    5124 main.go:141] libmachine: Decoding PEM data...
	I0813 17:36:33.981353    5124 main.go:141] libmachine: Parsing certificate...
	I0813 17:36:33.981415    5124 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:36:33.981460    5124 main.go:141] libmachine: Decoding PEM data...
	I0813 17:36:33.981478    5124 main.go:141] libmachine: Parsing certificate...
	I0813 17:36:33.982115    5124 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:36:34.136263    5124 main.go:141] libmachine: Creating SSH key...
	I0813 17:36:34.266905    5124 main.go:141] libmachine: Creating Disk image...
	I0813 17:36:34.266915    5124 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:36:34.267128    5124 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/auto-986000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/auto-986000/disk.qcow2
	I0813 17:36:34.277097    5124 main.go:141] libmachine: STDOUT: 
	I0813 17:36:34.277119    5124 main.go:141] libmachine: STDERR: 
	I0813 17:36:34.277174    5124 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/auto-986000/disk.qcow2 +20000M
	I0813 17:36:34.285407    5124 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:36:34.285423    5124 main.go:141] libmachine: STDERR: 
	I0813 17:36:34.285432    5124 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/auto-986000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/auto-986000/disk.qcow2
	I0813 17:36:34.285436    5124 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:36:34.285445    5124 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:36:34.285486    5124 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/auto-986000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/auto-986000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/auto-986000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:17:9a:03:43:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/auto-986000/disk.qcow2
	I0813 17:36:34.287077    5124 main.go:141] libmachine: STDOUT: 
	I0813 17:36:34.287097    5124 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:36:34.287110    5124 client.go:171] duration metric: took 305.993083ms to LocalClient.Create
	I0813 17:36:36.289242    5124 start.go:128] duration metric: took 2.364182917s to createHost
	I0813 17:36:36.289328    5124 start.go:83] releasing machines lock for "auto-986000", held for 2.364748375s
	W0813 17:36:36.289618    5124 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-986000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-986000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:36:36.308216    5124 out.go:177] 
	W0813 17:36:36.312292    5124 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:36:36.312324    5124 out.go:239] * 
	* 
	W0813 17:36:36.314035    5124 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:36:36.323230    5124 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
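
The verbose trace above shows how far provisioning gets before failing: the ISO is found in cache, the qcow2 disk is created and resized cleanly (both qemu-img calls log empty STDERR), and only the final step fails, where libmachine launches qemu-system-aarch64 via socket_vmnet_client and qemu expects the vmnet connection as inherited descriptor 3 (the -netdev socket,id=net0,fd=3 argument). Because the dial is refused, qemu never starts, which can be confirmed from the machine directory named in the log:

	# the driver passes -pidfile; if qemu had started, this file would exist
	test -f /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/auto-986000/qemu.pid || echo "qemu never started"
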
--- FAIL: TestNetworkPlugins/group/auto/Start (9.88s)

TestNetworkPlugins/group/kindnet/Start (9.85s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-986000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-986000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.847548625s)

-- stdout --
	* [kindnet-986000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-986000" primary control-plane node in "kindnet-986000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-986000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0813 17:36:38.472630    5235 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:36:38.472755    5235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:36:38.472759    5235 out.go:304] Setting ErrFile to fd 2...
	I0813 17:36:38.472761    5235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:36:38.472879    5235 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:36:38.473981    5235 out.go:298] Setting JSON to false
	I0813 17:36:38.491098    5235 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3962,"bootTime":1723591836,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:36:38.491161    5235 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:36:38.495794    5235 out.go:177] * [kindnet-986000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:36:38.500189    5235 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:36:38.500225    5235 notify.go:220] Checking for updates...
	I0813 17:36:38.508138    5235 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:36:38.511159    5235 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:36:38.514159    5235 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:36:38.517163    5235 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:36:38.520126    5235 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:36:38.523539    5235 config.go:182] Loaded profile config "multinode-980000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:36:38.523624    5235 config.go:182] Loaded profile config "stopped-upgrade-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0813 17:36:38.523683    5235 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:36:38.528146    5235 out.go:177] * Using the qemu2 driver based on user configuration
	I0813 17:36:38.535182    5235 start.go:297] selected driver: qemu2
	I0813 17:36:38.535190    5235 start.go:901] validating driver "qemu2" against <nil>
	I0813 17:36:38.535196    5235 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:36:38.537558    5235 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 17:36:38.540130    5235 out.go:177] * Automatically selected the socket_vmnet network
	I0813 17:36:38.543196    5235 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 17:36:38.543233    5235 cni.go:84] Creating CNI manager for "kindnet"
	I0813 17:36:38.543240    5235 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 17:36:38.543263    5235 start.go:340] cluster config:
	{Name:kindnet-986000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:36:38.546711    5235 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:36:38.554142    5235 out.go:177] * Starting "kindnet-986000" primary control-plane node in "kindnet-986000" cluster
	I0813 17:36:38.558160    5235 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:36:38.558178    5235 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 17:36:38.558187    5235 cache.go:56] Caching tarball of preloaded images
	I0813 17:36:38.558233    5235 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:36:38.558238    5235 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0813 17:36:38.558290    5235 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/kindnet-986000/config.json ...
	I0813 17:36:38.558300    5235 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/kindnet-986000/config.json: {Name:mkfbbde3fd0286e1c277daea7cd7a7047fcefbe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:36:38.558615    5235 start.go:360] acquireMachinesLock for kindnet-986000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:36:38.558645    5235 start.go:364] duration metric: took 24.875µs to acquireMachinesLock for "kindnet-986000"
	I0813 17:36:38.558662    5235 start.go:93] Provisioning new machine with config: &{Name:kindnet-986000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:36:38.558701    5235 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:36:38.566132    5235 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0813 17:36:38.581210    5235 start.go:159] libmachine.API.Create for "kindnet-986000" (driver="qemu2")
	I0813 17:36:38.581229    5235 client.go:168] LocalClient.Create starting
	I0813 17:36:38.581311    5235 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:36:38.581342    5235 main.go:141] libmachine: Decoding PEM data...
	I0813 17:36:38.581352    5235 main.go:141] libmachine: Parsing certificate...
	I0813 17:36:38.581387    5235 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:36:38.581411    5235 main.go:141] libmachine: Decoding PEM data...
	I0813 17:36:38.581420    5235 main.go:141] libmachine: Parsing certificate...
	I0813 17:36:38.581760    5235 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:36:38.724012    5235 main.go:141] libmachine: Creating SSH key...
	I0813 17:36:38.948982    5235 main.go:141] libmachine: Creating Disk image...
	I0813 17:36:38.948992    5235 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:36:38.949208    5235 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kindnet-986000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kindnet-986000/disk.qcow2
	I0813 17:36:38.958982    5235 main.go:141] libmachine: STDOUT: 
	I0813 17:36:38.959004    5235 main.go:141] libmachine: STDERR: 
	I0813 17:36:38.959065    5235 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kindnet-986000/disk.qcow2 +20000M
	I0813 17:36:38.967135    5235 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:36:38.967152    5235 main.go:141] libmachine: STDERR: 
	I0813 17:36:38.967170    5235 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kindnet-986000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kindnet-986000/disk.qcow2
	I0813 17:36:38.967174    5235 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:36:38.967191    5235 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:36:38.967219    5235 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kindnet-986000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kindnet-986000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kindnet-986000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:9e:7d:9f:c4:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kindnet-986000/disk.qcow2
	I0813 17:36:38.968914    5235 main.go:141] libmachine: STDOUT: 
	I0813 17:36:38.968930    5235 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:36:38.968950    5235 client.go:171] duration metric: took 387.722ms to LocalClient.Create
	I0813 17:36:40.971108    5235 start.go:128] duration metric: took 2.412417666s to createHost
	I0813 17:36:40.971174    5235 start.go:83] releasing machines lock for "kindnet-986000", held for 2.412562125s
	W0813 17:36:40.971234    5235 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:36:40.980972    5235 out.go:177] * Deleting "kindnet-986000" in qemu2 ...
	W0813 17:36:41.003894    5235 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:36:41.003912    5235 start.go:729] Will try again in 5 seconds ...
	I0813 17:36:46.005915    5235 start.go:360] acquireMachinesLock for kindnet-986000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:36:46.006038    5235 start.go:364] duration metric: took 93.958µs to acquireMachinesLock for "kindnet-986000"
	I0813 17:36:46.006063    5235 start.go:93] Provisioning new machine with config: &{Name:kindnet-986000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:36:46.006106    5235 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:36:46.016328    5235 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0813 17:36:46.031659    5235 start.go:159] libmachine.API.Create for "kindnet-986000" (driver="qemu2")
	I0813 17:36:46.031686    5235 client.go:168] LocalClient.Create starting
	I0813 17:36:46.031764    5235 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:36:46.031801    5235 main.go:141] libmachine: Decoding PEM data...
	I0813 17:36:46.031816    5235 main.go:141] libmachine: Parsing certificate...
	I0813 17:36:46.031850    5235 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:36:46.031873    5235 main.go:141] libmachine: Decoding PEM data...
	I0813 17:36:46.031881    5235 main.go:141] libmachine: Parsing certificate...
	I0813 17:36:46.032166    5235 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:36:46.175774    5235 main.go:141] libmachine: Creating SSH key...
	I0813 17:36:46.229486    5235 main.go:141] libmachine: Creating Disk image...
	I0813 17:36:46.229492    5235 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:36:46.229697    5235 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kindnet-986000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kindnet-986000/disk.qcow2
	I0813 17:36:46.239148    5235 main.go:141] libmachine: STDOUT: 
	I0813 17:36:46.239169    5235 main.go:141] libmachine: STDERR: 
	I0813 17:36:46.239218    5235 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kindnet-986000/disk.qcow2 +20000M
	I0813 17:36:46.247454    5235 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:36:46.247471    5235 main.go:141] libmachine: STDERR: 
	I0813 17:36:46.247483    5235 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kindnet-986000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kindnet-986000/disk.qcow2
	I0813 17:36:46.247488    5235 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:36:46.247499    5235 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:36:46.247531    5235 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kindnet-986000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kindnet-986000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kindnet-986000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:d9:99:02:51:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kindnet-986000/disk.qcow2
	I0813 17:36:46.249239    5235 main.go:141] libmachine: STDOUT: 
	I0813 17:36:46.249256    5235 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:36:46.249269    5235 client.go:171] duration metric: took 217.583125ms to LocalClient.Create
	I0813 17:36:48.251554    5235 start.go:128] duration metric: took 2.245357916s to createHost
	I0813 17:36:48.251637    5235 start.go:83] releasing machines lock for "kindnet-986000", held for 2.245625041s
	W0813 17:36:48.252011    5235 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-986000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-986000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:36:48.260571    5235 out.go:177] 
	W0813 17:36:48.265773    5235 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:36:48.265811    5235 out.go:239] * 
	* 
	W0813 17:36:48.268746    5235 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:36:48.277570    5235 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.85s)
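Note: every Start failure in this group shares the root cause visible in the stderr above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the qemu-system-aarch64 process is never launched. A minimal Go sketch of that reachability check, assuming only the socket path shown in the logs (illustrative, not minikube's code):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Dial the same unix socket that socket_vmnet_client passes to QEMU as fd 3;
	// a refused connection here reproduces the "Connection refused" in the log.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial fails, the socket_vmnet daemon is likely not running on the build host; for a Homebrew install it is typically started with "sudo brew services start socket_vmnet" (an assumption based on the /opt/socket_vmnet paths above).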

TestNetworkPlugins/group/flannel/Start (9.75s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-986000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-986000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.74423475s)

-- stdout --
	* [flannel-986000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-986000" primary control-plane node in "flannel-986000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-986000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0813 17:36:50.484061    5362 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:36:50.484186    5362 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:36:50.484189    5362 out.go:304] Setting ErrFile to fd 2...
	I0813 17:36:50.484191    5362 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:36:50.484327    5362 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:36:50.485676    5362 out.go:298] Setting JSON to false
	I0813 17:36:50.502847    5362 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3974,"bootTime":1723591836,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:36:50.502932    5362 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:36:50.508123    5362 out.go:177] * [flannel-986000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:36:50.516117    5362 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:36:50.516149    5362 notify.go:220] Checking for updates...
	I0813 17:36:50.523125    5362 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:36:50.526148    5362 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:36:50.530122    5362 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:36:50.533123    5362 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:36:50.536134    5362 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:36:50.539395    5362 config.go:182] Loaded profile config "multinode-980000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:36:50.539458    5362 config.go:182] Loaded profile config "stopped-upgrade-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0813 17:36:50.539506    5362 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:36:50.543114    5362 out.go:177] * Using the qemu2 driver based on user configuration
	I0813 17:36:50.550084    5362 start.go:297] selected driver: qemu2
	I0813 17:36:50.550093    5362 start.go:901] validating driver "qemu2" against <nil>
	I0813 17:36:50.550099    5362 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:36:50.552365    5362 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 17:36:50.555100    5362 out.go:177] * Automatically selected the socket_vmnet network
	I0813 17:36:50.558181    5362 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 17:36:50.558198    5362 cni.go:84] Creating CNI manager for "flannel"
	I0813 17:36:50.558201    5362 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0813 17:36:50.558223    5362 start.go:340] cluster config:
	{Name:flannel-986000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:36:50.561743    5362 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:36:50.568974    5362 out.go:177] * Starting "flannel-986000" primary control-plane node in "flannel-986000" cluster
	I0813 17:36:50.573089    5362 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:36:50.573117    5362 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 17:36:50.573123    5362 cache.go:56] Caching tarball of preloaded images
	I0813 17:36:50.573188    5362 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:36:50.573194    5362 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0813 17:36:50.573251    5362 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/flannel-986000/config.json ...
	I0813 17:36:50.573265    5362 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/flannel-986000/config.json: {Name:mk9bcfc87312dee7995db19b73d241c7fc05be97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:36:50.573522    5362 start.go:360] acquireMachinesLock for flannel-986000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:36:50.573555    5362 start.go:364] duration metric: took 28.458µs to acquireMachinesLock for "flannel-986000"
	I0813 17:36:50.573566    5362 start.go:93] Provisioning new machine with config: &{Name:flannel-986000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:36:50.573600    5362 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:36:50.577042    5362 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0813 17:36:50.592822    5362 start.go:159] libmachine.API.Create for "flannel-986000" (driver="qemu2")
	I0813 17:36:50.592860    5362 client.go:168] LocalClient.Create starting
	I0813 17:36:50.592937    5362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:36:50.592967    5362 main.go:141] libmachine: Decoding PEM data...
	I0813 17:36:50.592976    5362 main.go:141] libmachine: Parsing certificate...
	I0813 17:36:50.593013    5362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:36:50.593037    5362 main.go:141] libmachine: Decoding PEM data...
	I0813 17:36:50.593045    5362 main.go:141] libmachine: Parsing certificate...
	I0813 17:36:50.593376    5362 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:36:50.739660    5362 main.go:141] libmachine: Creating SSH key...
	I0813 17:36:50.795716    5362 main.go:141] libmachine: Creating Disk image...
	I0813 17:36:50.795725    5362 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:36:50.795963    5362 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/flannel-986000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/flannel-986000/disk.qcow2
	I0813 17:36:50.805560    5362 main.go:141] libmachine: STDOUT: 
	I0813 17:36:50.805582    5362 main.go:141] libmachine: STDERR: 
	I0813 17:36:50.805647    5362 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/flannel-986000/disk.qcow2 +20000M
	I0813 17:36:50.814418    5362 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:36:50.814437    5362 main.go:141] libmachine: STDERR: 
	I0813 17:36:50.814452    5362 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/flannel-986000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/flannel-986000/disk.qcow2
	I0813 17:36:50.814456    5362 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:36:50.814469    5362 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:36:50.814506    5362 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/flannel-986000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/flannel-986000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/flannel-986000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:a1:c8:74:f9:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/flannel-986000/disk.qcow2
	I0813 17:36:50.816558    5362 main.go:141] libmachine: STDOUT: 
	I0813 17:36:50.816579    5362 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:36:50.816600    5362 client.go:171] duration metric: took 223.73675ms to LocalClient.Create
	I0813 17:36:52.818933    5362 start.go:128] duration metric: took 2.245292792s to createHost
	I0813 17:36:52.819082    5362 start.go:83] releasing machines lock for "flannel-986000", held for 2.245554541s
	W0813 17:36:52.819125    5362 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:36:52.829271    5362 out.go:177] * Deleting "flannel-986000" in qemu2 ...
	W0813 17:36:52.863897    5362 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:36:52.863932    5362 start.go:729] Will try again in 5 seconds ...
	I0813 17:36:57.866069    5362 start.go:360] acquireMachinesLock for flannel-986000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:36:57.866546    5362 start.go:364] duration metric: took 375.667µs to acquireMachinesLock for "flannel-986000"
	I0813 17:36:57.866651    5362 start.go:93] Provisioning new machine with config: &{Name:flannel-986000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:36:57.866978    5362 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:36:57.877585    5362 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0813 17:36:57.926735    5362 start.go:159] libmachine.API.Create for "flannel-986000" (driver="qemu2")
	I0813 17:36:57.926787    5362 client.go:168] LocalClient.Create starting
	I0813 17:36:57.926926    5362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:36:57.926991    5362 main.go:141] libmachine: Decoding PEM data...
	I0813 17:36:57.927011    5362 main.go:141] libmachine: Parsing certificate...
	I0813 17:36:57.927082    5362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:36:57.927128    5362 main.go:141] libmachine: Decoding PEM data...
	I0813 17:36:57.927146    5362 main.go:141] libmachine: Parsing certificate...
	I0813 17:36:57.927650    5362 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:36:58.091692    5362 main.go:141] libmachine: Creating SSH key...
	I0813 17:36:58.130955    5362 main.go:141] libmachine: Creating Disk image...
	I0813 17:36:58.130960    5362 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:36:58.131141    5362 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/flannel-986000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/flannel-986000/disk.qcow2
	I0813 17:36:58.140837    5362 main.go:141] libmachine: STDOUT: 
	I0813 17:36:58.140857    5362 main.go:141] libmachine: STDERR: 
	I0813 17:36:58.140907    5362 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/flannel-986000/disk.qcow2 +20000M
	I0813 17:36:58.148993    5362 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:36:58.149009    5362 main.go:141] libmachine: STDERR: 
	I0813 17:36:58.149019    5362 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/flannel-986000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/flannel-986000/disk.qcow2
	I0813 17:36:58.149023    5362 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:36:58.149033    5362 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:36:58.149072    5362 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/flannel-986000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/flannel-986000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/flannel-986000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:c1:72:44:a9:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/flannel-986000/disk.qcow2
	I0813 17:36:58.150666    5362 main.go:141] libmachine: STDOUT: 
	I0813 17:36:58.150687    5362 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:36:58.150703    5362 client.go:171] duration metric: took 223.914584ms to LocalClient.Create
	I0813 17:37:00.152919    5362 start.go:128] duration metric: took 2.28594075s to createHost
	I0813 17:37:00.153001    5362 start.go:83] releasing machines lock for "flannel-986000", held for 2.286469834s
	W0813 17:37:00.153343    5362 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-986000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-986000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:37:00.168001    5362 out.go:177] 
	W0813 17:37:00.173156    5362 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:37:00.173206    5362 out.go:239] * 
	* 
	W0813 17:37:00.176755    5362 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:37:00.185989    5362 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.75s)
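The same delete-and-retry flow appears again above: StartHost fails, the profile is deleted, minikube waits five seconds, and the second create attempt hits the same unreachable socket, ending in GUEST_PROVISION (exit status 80). A compressed Go sketch of that control flow, with createHost as a hypothetical stand-in for the libmachine create path (illustrative only, not minikube's implementation):

package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the libmachine create path; in these runs it always
// fails because /var/run/socket_vmnet refuses connections.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := createHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := createHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}

Because the retry changes nothing about the host-side daemon, both attempts fail identically, which is why every test in this group completes in roughly ten seconds.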

TestNetworkPlugins/group/enable-default-cni/Start (9.86s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-986000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-986000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.858394834s)

-- stdout --
	* [enable-default-cni-986000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-986000" primary control-plane node in "enable-default-cni-986000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-986000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0813 17:37:02.559316    5490 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:37:02.559444    5490 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:37:02.559447    5490 out.go:304] Setting ErrFile to fd 2...
	I0813 17:37:02.559449    5490 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:37:02.559572    5490 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:37:02.560810    5490 out.go:298] Setting JSON to false
	I0813 17:37:02.577218    5490 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3986,"bootTime":1723591836,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:37:02.577282    5490 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:37:02.581457    5490 out.go:177] * [enable-default-cni-986000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:37:02.588437    5490 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:37:02.588475    5490 notify.go:220] Checking for updates...
	I0813 17:37:02.594368    5490 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:37:02.597427    5490 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:37:02.598753    5490 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:37:02.601338    5490 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:37:02.604393    5490 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:37:02.607705    5490 config.go:182] Loaded profile config "multinode-980000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:37:02.607776    5490 config.go:182] Loaded profile config "stopped-upgrade-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0813 17:37:02.607831    5490 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:37:02.612372    5490 out.go:177] * Using the qemu2 driver based on user configuration
	I0813 17:37:02.619410    5490 start.go:297] selected driver: qemu2
	I0813 17:37:02.619416    5490 start.go:901] validating driver "qemu2" against <nil>
	I0813 17:37:02.619422    5490 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:37:02.621587    5490 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 17:37:02.624316    5490 out.go:177] * Automatically selected the socket_vmnet network
	E0813 17:37:02.627439    5490 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0813 17:37:02.627451    5490 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 17:37:02.627483    5490 cni.go:84] Creating CNI manager for "bridge"
	I0813 17:37:02.627486    5490 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 17:37:02.627515    5490 start.go:340] cluster config:
	{Name:enable-default-cni-986000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:37:02.631029    5490 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:37:02.639364    5490 out.go:177] * Starting "enable-default-cni-986000" primary control-plane node in "enable-default-cni-986000" cluster
	I0813 17:37:02.643384    5490 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:37:02.643402    5490 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 17:37:02.643410    5490 cache.go:56] Caching tarball of preloaded images
	I0813 17:37:02.643459    5490 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:37:02.643464    5490 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0813 17:37:02.643517    5490 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/enable-default-cni-986000/config.json ...
	I0813 17:37:02.643527    5490 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/enable-default-cni-986000/config.json: {Name:mkf26ba2fcf708e0c84aa6b2e59813d40b7fdf41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:37:02.643782    5490 start.go:360] acquireMachinesLock for enable-default-cni-986000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:37:02.643812    5490 start.go:364] duration metric: took 24.375µs to acquireMachinesLock for "enable-default-cni-986000"
	I0813 17:37:02.643823    5490 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-986000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:37:02.643852    5490 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:37:02.652403    5490 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0813 17:37:02.667513    5490 start.go:159] libmachine.API.Create for "enable-default-cni-986000" (driver="qemu2")
	I0813 17:37:02.667534    5490 client.go:168] LocalClient.Create starting
	I0813 17:37:02.667603    5490 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:37:02.667634    5490 main.go:141] libmachine: Decoding PEM data...
	I0813 17:37:02.667643    5490 main.go:141] libmachine: Parsing certificate...
	I0813 17:37:02.667676    5490 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:37:02.667702    5490 main.go:141] libmachine: Decoding PEM data...
	I0813 17:37:02.667709    5490 main.go:141] libmachine: Parsing certificate...
	I0813 17:37:02.668101    5490 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:37:02.808503    5490 main.go:141] libmachine: Creating SSH key...
	I0813 17:37:02.944009    5490 main.go:141] libmachine: Creating Disk image...
	I0813 17:37:02.944018    5490 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:37:02.944230    5490 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/enable-default-cni-986000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/enable-default-cni-986000/disk.qcow2
	I0813 17:37:02.953840    5490 main.go:141] libmachine: STDOUT: 
	I0813 17:37:02.953862    5490 main.go:141] libmachine: STDERR: 
	I0813 17:37:02.953923    5490 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/enable-default-cni-986000/disk.qcow2 +20000M
	I0813 17:37:02.961954    5490 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:37:02.961972    5490 main.go:141] libmachine: STDERR: 
	I0813 17:37:02.961986    5490 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/enable-default-cni-986000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/enable-default-cni-986000/disk.qcow2
	I0813 17:37:02.961990    5490 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:37:02.962010    5490 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:37:02.962037    5490 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/enable-default-cni-986000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/enable-default-cni-986000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/enable-default-cni-986000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:25:55:37:e9:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/enable-default-cni-986000/disk.qcow2
	I0813 17:37:02.963672    5490 main.go:141] libmachine: STDOUT: 
	I0813 17:37:02.963689    5490 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:37:02.963708    5490 client.go:171] duration metric: took 296.174791ms to LocalClient.Create
	I0813 17:37:04.965980    5490 start.go:128] duration metric: took 2.322136958s to createHost
	I0813 17:37:04.966059    5490 start.go:83] releasing machines lock for "enable-default-cni-986000", held for 2.322276458s
	W0813 17:37:04.966115    5490 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:37:04.978350    5490 out.go:177] * Deleting "enable-default-cni-986000" in qemu2 ...
	W0813 17:37:05.007125    5490 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:37:05.007153    5490 start.go:729] Will try again in 5 seconds ...
	I0813 17:37:10.009247    5490 start.go:360] acquireMachinesLock for enable-default-cni-986000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:37:10.009738    5490 start.go:364] duration metric: took 365.75µs to acquireMachinesLock for "enable-default-cni-986000"
	I0813 17:37:10.009843    5490 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-986000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:37:10.010074    5490 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:37:10.020557    5490 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0813 17:37:10.068110    5490 start.go:159] libmachine.API.Create for "enable-default-cni-986000" (driver="qemu2")
	I0813 17:37:10.068169    5490 client.go:168] LocalClient.Create starting
	I0813 17:37:10.068314    5490 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:37:10.068378    5490 main.go:141] libmachine: Decoding PEM data...
	I0813 17:37:10.068396    5490 main.go:141] libmachine: Parsing certificate...
	I0813 17:37:10.068455    5490 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:37:10.068501    5490 main.go:141] libmachine: Decoding PEM data...
	I0813 17:37:10.068516    5490 main.go:141] libmachine: Parsing certificate...
	I0813 17:37:10.069498    5490 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:37:10.222734    5490 main.go:141] libmachine: Creating SSH key...
	I0813 17:37:10.321298    5490 main.go:141] libmachine: Creating Disk image...
	I0813 17:37:10.321305    5490 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:37:10.321493    5490 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/enable-default-cni-986000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/enable-default-cni-986000/disk.qcow2
	I0813 17:37:10.330606    5490 main.go:141] libmachine: STDOUT: 
	I0813 17:37:10.330624    5490 main.go:141] libmachine: STDERR: 
	I0813 17:37:10.330685    5490 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/enable-default-cni-986000/disk.qcow2 +20000M
	I0813 17:37:10.338608    5490 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:37:10.338624    5490 main.go:141] libmachine: STDERR: 
	I0813 17:37:10.338636    5490 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/enable-default-cni-986000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/enable-default-cni-986000/disk.qcow2
	I0813 17:37:10.338646    5490 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:37:10.338656    5490 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:37:10.338687    5490 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/enable-default-cni-986000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/enable-default-cni-986000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/enable-default-cni-986000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:31:41:11:c8:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/enable-default-cni-986000/disk.qcow2
	I0813 17:37:10.340283    5490 main.go:141] libmachine: STDOUT: 
	I0813 17:37:10.340301    5490 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:37:10.340314    5490 client.go:171] duration metric: took 272.143916ms to LocalClient.Create
	I0813 17:37:12.342485    5490 start.go:128] duration metric: took 2.332419125s to createHost
	I0813 17:37:12.342563    5490 start.go:83] releasing machines lock for "enable-default-cni-986000", held for 2.33284125s
	W0813 17:37:12.343061    5490 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-986000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-986000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:37:12.355694    5490 out.go:177] 
	W0813 17:37:12.359688    5490 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:37:12.359715    5490 out.go:239] * 
	* 
	W0813 17:37:12.363094    5490 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:37:12.375609    5490 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.86s)
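
Every Start failure in this group reduces to the same host-side condition: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client exits with "Connection refused" before QEMU is ever launched. A minimal Go probe, assuming it is run on the affected agent and that the socket path matches SocketVMnetPath in the config dump above, reproduces the check the tests are failing on:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket that socket_vmnet is expected to serve.
		// On a healthy host this connects; the failing runs above
		// correspond to the "connection refused" branch.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}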

TestNetworkPlugins/group/bridge/Start (9.85s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-986000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-986000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.852191291s)

-- stdout --
	* [bridge-986000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-986000" primary control-plane node in "bridge-986000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-986000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0813 17:37:14.541300    5607 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:37:14.541410    5607 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:37:14.541413    5607 out.go:304] Setting ErrFile to fd 2...
	I0813 17:37:14.541416    5607 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:37:14.541535    5607 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:37:14.542705    5607 out.go:298] Setting JSON to false
	I0813 17:37:14.559591    5607 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3998,"bootTime":1723591836,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:37:14.559653    5607 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:37:14.564942    5607 out.go:177] * [bridge-986000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:37:14.573072    5607 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:37:14.573129    5607 notify.go:220] Checking for updates...
	I0813 17:37:14.580016    5607 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:37:14.583096    5607 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:37:14.587027    5607 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:37:14.590053    5607 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:37:14.593085    5607 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:37:14.596372    5607 config.go:182] Loaded profile config "multinode-980000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:37:14.596448    5607 config.go:182] Loaded profile config "stopped-upgrade-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0813 17:37:14.596494    5607 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:37:14.600030    5607 out.go:177] * Using the qemu2 driver based on user configuration
	I0813 17:37:14.607035    5607 start.go:297] selected driver: qemu2
	I0813 17:37:14.607041    5607 start.go:901] validating driver "qemu2" against <nil>
	I0813 17:37:14.607047    5607 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:37:14.609243    5607 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 17:37:14.613038    5607 out.go:177] * Automatically selected the socket_vmnet network
	I0813 17:37:14.616136    5607 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 17:37:14.616157    5607 cni.go:84] Creating CNI manager for "bridge"
	I0813 17:37:14.616160    5607 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 17:37:14.616191    5607 start.go:340] cluster config:
	{Name:bridge-986000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:37:14.619465    5607 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:37:14.624057    5607 out.go:177] * Starting "bridge-986000" primary control-plane node in "bridge-986000" cluster
	I0813 17:37:14.631028    5607 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:37:14.631053    5607 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 17:37:14.631062    5607 cache.go:56] Caching tarball of preloaded images
	I0813 17:37:14.631140    5607 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:37:14.631146    5607 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0813 17:37:14.631202    5607 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/bridge-986000/config.json ...
	I0813 17:37:14.631212    5607 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/bridge-986000/config.json: {Name:mkd237320f680220c6aa8039882b33449ef4259d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:37:14.631445    5607 start.go:360] acquireMachinesLock for bridge-986000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:37:14.631479    5607 start.go:364] duration metric: took 29.625µs to acquireMachinesLock for "bridge-986000"
	I0813 17:37:14.631499    5607 start.go:93] Provisioning new machine with config: &{Name:bridge-986000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:bridge-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:37:14.631525    5607 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:37:14.639033    5607 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0813 17:37:14.654073    5607 start.go:159] libmachine.API.Create for "bridge-986000" (driver="qemu2")
	I0813 17:37:14.654108    5607 client.go:168] LocalClient.Create starting
	I0813 17:37:14.654183    5607 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:37:14.654212    5607 main.go:141] libmachine: Decoding PEM data...
	I0813 17:37:14.654228    5607 main.go:141] libmachine: Parsing certificate...
	I0813 17:37:14.654265    5607 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:37:14.654288    5607 main.go:141] libmachine: Decoding PEM data...
	I0813 17:37:14.654299    5607 main.go:141] libmachine: Parsing certificate...
	I0813 17:37:14.654626    5607 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:37:14.795809    5607 main.go:141] libmachine: Creating SSH key...
	I0813 17:37:14.859525    5607 main.go:141] libmachine: Creating Disk image...
	I0813 17:37:14.859531    5607 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:37:14.859713    5607 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/bridge-986000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/bridge-986000/disk.qcow2
	I0813 17:37:14.868960    5607 main.go:141] libmachine: STDOUT: 
	I0813 17:37:14.868976    5607 main.go:141] libmachine: STDERR: 
	I0813 17:37:14.869027    5607 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/bridge-986000/disk.qcow2 +20000M
	I0813 17:37:14.877024    5607 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:37:14.877059    5607 main.go:141] libmachine: STDERR: 
	I0813 17:37:14.877075    5607 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/bridge-986000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/bridge-986000/disk.qcow2
	I0813 17:37:14.877079    5607 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:37:14.877092    5607 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:37:14.877122    5607 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/bridge-986000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/bridge-986000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/bridge-986000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:b3:26:ea:f2:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/bridge-986000/disk.qcow2
	I0813 17:37:14.878746    5607 main.go:141] libmachine: STDOUT: 
	I0813 17:37:14.878762    5607 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:37:14.878782    5607 client.go:171] duration metric: took 224.671625ms to LocalClient.Create
	I0813 17:37:16.880969    5607 start.go:128] duration metric: took 2.249449417s to createHost
	I0813 17:37:16.881045    5607 start.go:83] releasing machines lock for "bridge-986000", held for 2.249594208s
	W0813 17:37:16.881125    5607 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:37:16.898193    5607 out.go:177] * Deleting "bridge-986000" in qemu2 ...
	W0813 17:37:16.927388    5607 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:37:16.927430    5607 start.go:729] Will try again in 5 seconds ...
	I0813 17:37:21.928421    5607 start.go:360] acquireMachinesLock for bridge-986000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:37:21.928783    5607 start.go:364] duration metric: took 266.25µs to acquireMachinesLock for "bridge-986000"
	I0813 17:37:21.928847    5607 start.go:93] Provisioning new machine with config: &{Name:bridge-986000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:bridge-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:37:21.929005    5607 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:37:21.938474    5607 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0813 17:37:21.976599    5607 start.go:159] libmachine.API.Create for "bridge-986000" (driver="qemu2")
	I0813 17:37:21.976652    5607 client.go:168] LocalClient.Create starting
	I0813 17:37:21.976797    5607 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:37:21.976848    5607 main.go:141] libmachine: Decoding PEM data...
	I0813 17:37:21.976863    5607 main.go:141] libmachine: Parsing certificate...
	I0813 17:37:21.976925    5607 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:37:21.976964    5607 main.go:141] libmachine: Decoding PEM data...
	I0813 17:37:21.976974    5607 main.go:141] libmachine: Parsing certificate...
	I0813 17:37:21.977437    5607 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:37:22.126490    5607 main.go:141] libmachine: Creating SSH key...
	I0813 17:37:22.301776    5607 main.go:141] libmachine: Creating Disk image...
	I0813 17:37:22.301787    5607 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:37:22.302016    5607 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/bridge-986000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/bridge-986000/disk.qcow2
	I0813 17:37:22.311481    5607 main.go:141] libmachine: STDOUT: 
	I0813 17:37:22.311498    5607 main.go:141] libmachine: STDERR: 
	I0813 17:37:22.311550    5607 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/bridge-986000/disk.qcow2 +20000M
	I0813 17:37:22.319571    5607 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:37:22.319584    5607 main.go:141] libmachine: STDERR: 
	I0813 17:37:22.319596    5607 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/bridge-986000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/bridge-986000/disk.qcow2
	I0813 17:37:22.319602    5607 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:37:22.319612    5607 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:37:22.319637    5607 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/bridge-986000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/bridge-986000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/bridge-986000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:58:28:2e:dd:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/bridge-986000/disk.qcow2
	I0813 17:37:22.321193    5607 main.go:141] libmachine: STDOUT: 
	I0813 17:37:22.321209    5607 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:37:22.321221    5607 client.go:171] duration metric: took 344.568667ms to LocalClient.Create
	I0813 17:37:24.323383    5607 start.go:128] duration metric: took 2.394384042s to createHost
	I0813 17:37:24.323453    5607 start.go:83] releasing machines lock for "bridge-986000", held for 2.39469225s
	W0813 17:37:24.323808    5607 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-986000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-986000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:37:24.334503    5607 out.go:177] 
	W0813 17:37:24.339552    5607 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:37:24.339589    5607 out.go:239] * 
	* 
	W0813 17:37:24.342683    5607 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:37:24.352468    5607 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.85s)
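
The stderr trace also shows the recovery path minikube takes before giving up: StartHost fails, the half-created profile is deleted, and exactly one more attempt runs after a fixed five-second wait ("Will try again in 5 seconds"). A sketch of that control flow, with createHost as a hypothetical stand-in for the real host-creation step, which here always fails the way the log does:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost is a hypothetical stand-in for minikube's host creation.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err == nil {
			return
		}
		fmt.Println("! StartHost failed, but will try again")
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
		if err := createHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}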

TestNetworkPlugins/group/kubenet/Start (9.75s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-986000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-986000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.748603542s)

-- stdout --
	* [kubenet-986000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-986000" primary control-plane node in "kubenet-986000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-986000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0813 17:37:26.542823    5724 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:37:26.543164    5724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:37:26.543177    5724 out.go:304] Setting ErrFile to fd 2...
	I0813 17:37:26.543180    5724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:37:26.543411    5724 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:37:26.544836    5724 out.go:298] Setting JSON to false
	I0813 17:37:26.561578    5724 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4010,"bootTime":1723591836,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:37:26.561656    5724 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:37:26.567999    5724 out.go:177] * [kubenet-986000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:37:26.575953    5724 notify.go:220] Checking for updates...
	I0813 17:37:26.579996    5724 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:37:26.583010    5724 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:37:26.585973    5724 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:37:26.589996    5724 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:37:26.593013    5724 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:37:26.595970    5724 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:37:26.599307    5724 config.go:182] Loaded profile config "multinode-980000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:37:26.599384    5724 config.go:182] Loaded profile config "stopped-upgrade-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0813 17:37:26.599456    5724 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:37:26.603997    5724 out.go:177] * Using the qemu2 driver based on user configuration
	I0813 17:37:26.611002    5724 start.go:297] selected driver: qemu2
	I0813 17:37:26.611011    5724 start.go:901] validating driver "qemu2" against <nil>
	I0813 17:37:26.611017    5724 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:37:26.613271    5724 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 17:37:26.617024    5724 out.go:177] * Automatically selected the socket_vmnet network
	I0813 17:37:26.621060    5724 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 17:37:26.621092    5724 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0813 17:37:26.621113    5724 start.go:340] cluster config:
	{Name:kubenet-986000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:37:26.624762    5724 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:37:26.631966    5724 out.go:177] * Starting "kubenet-986000" primary control-plane node in "kubenet-986000" cluster
	I0813 17:37:26.635991    5724 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:37:26.636011    5724 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 17:37:26.636017    5724 cache.go:56] Caching tarball of preloaded images
	I0813 17:37:26.636069    5724 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:37:26.636074    5724 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0813 17:37:26.636125    5724 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/kubenet-986000/config.json ...
	I0813 17:37:26.636136    5724 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/kubenet-986000/config.json: {Name:mk54d59c4e78516ead69e3c73340d956d6617de7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:37:26.636455    5724 start.go:360] acquireMachinesLock for kubenet-986000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:37:26.636498    5724 start.go:364] duration metric: took 36.292µs to acquireMachinesLock for "kubenet-986000"
	I0813 17:37:26.636511    5724 start.go:93] Provisioning new machine with config: &{Name:kubenet-986000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:kubenet-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:37:26.636563    5724 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:37:26.644999    5724 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0813 17:37:26.660163    5724 start.go:159] libmachine.API.Create for "kubenet-986000" (driver="qemu2")
	I0813 17:37:26.660183    5724 client.go:168] LocalClient.Create starting
	I0813 17:37:26.660251    5724 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:37:26.660280    5724 main.go:141] libmachine: Decoding PEM data...
	I0813 17:37:26.660289    5724 main.go:141] libmachine: Parsing certificate...
	I0813 17:37:26.660328    5724 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:37:26.660352    5724 main.go:141] libmachine: Decoding PEM data...
	I0813 17:37:26.660360    5724 main.go:141] libmachine: Parsing certificate...
	I0813 17:37:26.660668    5724 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:37:26.802618    5724 main.go:141] libmachine: Creating SSH key...
	I0813 17:37:26.894073    5724 main.go:141] libmachine: Creating Disk image...
	I0813 17:37:26.894083    5724 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:37:26.894286    5724 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubenet-986000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubenet-986000/disk.qcow2
	I0813 17:37:26.903585    5724 main.go:141] libmachine: STDOUT: 
	I0813 17:37:26.903604    5724 main.go:141] libmachine: STDERR: 
	I0813 17:37:26.903650    5724 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubenet-986000/disk.qcow2 +20000M
	I0813 17:37:26.911671    5724 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:37:26.911687    5724 main.go:141] libmachine: STDERR: 
	I0813 17:37:26.911702    5724 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubenet-986000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubenet-986000/disk.qcow2
	I0813 17:37:26.911706    5724 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:37:26.911718    5724 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:37:26.911747    5724 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubenet-986000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubenet-986000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubenet-986000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:7d:f1:b3:12:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubenet-986000/disk.qcow2
	I0813 17:37:26.913357    5724 main.go:141] libmachine: STDOUT: 
	I0813 17:37:26.913373    5724 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:37:26.913394    5724 client.go:171] duration metric: took 253.210042ms to LocalClient.Create
	I0813 17:37:28.915477    5724 start.go:128] duration metric: took 2.278937792s to createHost
	I0813 17:37:28.915521    5724 start.go:83] releasing machines lock for "kubenet-986000", held for 2.2790535s
	W0813 17:37:28.915566    5724 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:37:28.921792    5724 out.go:177] * Deleting "kubenet-986000" in qemu2 ...
	W0813 17:37:28.945889    5724 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:37:28.945907    5724 start.go:729] Will try again in 5 seconds ...
	I0813 17:37:33.948069    5724 start.go:360] acquireMachinesLock for kubenet-986000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:37:33.948575    5724 start.go:364] duration metric: took 397.042µs to acquireMachinesLock for "kubenet-986000"
	I0813 17:37:33.948689    5724 start.go:93] Provisioning new machine with config: &{Name:kubenet-986000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:kubenet-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:37:33.948893    5724 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:37:33.958664    5724 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0813 17:37:34.009727    5724 start.go:159] libmachine.API.Create for "kubenet-986000" (driver="qemu2")
	I0813 17:37:34.009780    5724 client.go:168] LocalClient.Create starting
	I0813 17:37:34.009902    5724 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:37:34.009984    5724 main.go:141] libmachine: Decoding PEM data...
	I0813 17:37:34.010004    5724 main.go:141] libmachine: Parsing certificate...
	I0813 17:37:34.010070    5724 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:37:34.010115    5724 main.go:141] libmachine: Decoding PEM data...
	I0813 17:37:34.010130    5724 main.go:141] libmachine: Parsing certificate...
	I0813 17:37:34.010630    5724 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:37:34.164032    5724 main.go:141] libmachine: Creating SSH key...
	I0813 17:37:34.189121    5724 main.go:141] libmachine: Creating Disk image...
	I0813 17:37:34.189126    5724 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:37:34.189325    5724 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubenet-986000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubenet-986000/disk.qcow2
	I0813 17:37:34.198827    5724 main.go:141] libmachine: STDOUT: 
	I0813 17:37:34.198855    5724 main.go:141] libmachine: STDERR: 
	I0813 17:37:34.198909    5724 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubenet-986000/disk.qcow2 +20000M
	I0813 17:37:34.207003    5724 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:37:34.207022    5724 main.go:141] libmachine: STDERR: 
	I0813 17:37:34.207031    5724 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubenet-986000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubenet-986000/disk.qcow2
	I0813 17:37:34.207035    5724 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:37:34.207043    5724 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:37:34.207081    5724 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubenet-986000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubenet-986000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubenet-986000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:5f:e3:49:28:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/kubenet-986000/disk.qcow2
	I0813 17:37:34.208667    5724 main.go:141] libmachine: STDOUT: 
	I0813 17:37:34.208684    5724 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:37:34.208698    5724 client.go:171] duration metric: took 198.916084ms to LocalClient.Create
	I0813 17:37:36.210870    5724 start.go:128] duration metric: took 2.261944667s to createHost
	I0813 17:37:36.210941    5724 start.go:83] releasing machines lock for "kubenet-986000", held for 2.262380917s
	W0813 17:37:36.211333    5724 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-986000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-986000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:37:36.230048    5724 out.go:177] 
	W0813 17:37:36.233133    5724 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:37:36.233168    5724 out.go:239] * 
	* 
	W0813 17:37:36.235291    5724 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:37:36.250041    5724 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.75s)
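Every failed Start in this group stops at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor. The failing probe can be reproduced outside minikube with a few lines of Go (a minimal sketch assuming only the socket path shown in the logs above; this is not minikube's own code):

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Dial the same unix socket that socket_vmnet_client uses. With no
        // socket_vmnet daemon listening, this returns the "Connection
        // refused" seen in every failure above.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
            os.Exit(1)
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is listening")
    }

Given the identical error across all nine plugin tests, this probe would presumably fail on the agent, pointing at the socket_vmnet service on the Jenkins host rather than at any individual test.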

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (9.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-986000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-986000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.7935495s)

                                                
                                                
-- stdout --
	* [custom-flannel-986000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-986000" primary control-plane node in "custom-flannel-986000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-986000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 17:37:38.437667    5839 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:37:38.437802    5839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:37:38.437806    5839 out.go:304] Setting ErrFile to fd 2...
	I0813 17:37:38.437808    5839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:37:38.437955    5839 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:37:38.439002    5839 out.go:298] Setting JSON to false
	I0813 17:37:38.455378    5839 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4022,"bootTime":1723591836,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:37:38.455453    5839 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:37:38.460606    5839 out.go:177] * [custom-flannel-986000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:37:38.467569    5839 notify.go:220] Checking for updates...
	I0813 17:37:38.471576    5839 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:37:38.475583    5839 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:37:38.479555    5839 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:37:38.483603    5839 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:37:38.486599    5839 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:37:38.492491    5839 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:37:38.495851    5839 config.go:182] Loaded profile config "multinode-980000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:37:38.495912    5839 config.go:182] Loaded profile config "stopped-upgrade-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0813 17:37:38.495969    5839 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:37:38.499576    5839 out.go:177] * Using the qemu2 driver based on user configuration
	I0813 17:37:38.506562    5839 start.go:297] selected driver: qemu2
	I0813 17:37:38.506567    5839 start.go:901] validating driver "qemu2" against <nil>
	I0813 17:37:38.506571    5839 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:37:38.508787    5839 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 17:37:38.512575    5839 out.go:177] * Automatically selected the socket_vmnet network
	I0813 17:37:38.515558    5839 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 17:37:38.515585    5839 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0813 17:37:38.515593    5839 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0813 17:37:38.515617    5839 start.go:340] cluster config:
	{Name:custom-flannel-986000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:37:38.519105    5839 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:37:38.526538    5839 out.go:177] * Starting "custom-flannel-986000" primary control-plane node in "custom-flannel-986000" cluster
	I0813 17:37:38.530542    5839 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:37:38.530565    5839 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 17:37:38.530575    5839 cache.go:56] Caching tarball of preloaded images
	I0813 17:37:38.530635    5839 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:37:38.530640    5839 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0813 17:37:38.530699    5839 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/custom-flannel-986000/config.json ...
	I0813 17:37:38.530716    5839 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/custom-flannel-986000/config.json: {Name:mke93a3470afc69f37eb131e7a5f584180a57f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:37:38.530966    5839 start.go:360] acquireMachinesLock for custom-flannel-986000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:37:38.531003    5839 start.go:364] duration metric: took 28.459µs to acquireMachinesLock for "custom-flannel-986000"
	I0813 17:37:38.531023    5839 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-986000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:37:38.531048    5839 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:37:38.535549    5839 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0813 17:37:38.551351    5839 start.go:159] libmachine.API.Create for "custom-flannel-986000" (driver="qemu2")
	I0813 17:37:38.551377    5839 client.go:168] LocalClient.Create starting
	I0813 17:37:38.551447    5839 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:37:38.551478    5839 main.go:141] libmachine: Decoding PEM data...
	I0813 17:37:38.551490    5839 main.go:141] libmachine: Parsing certificate...
	I0813 17:37:38.551530    5839 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:37:38.551552    5839 main.go:141] libmachine: Decoding PEM data...
	I0813 17:37:38.551559    5839 main.go:141] libmachine: Parsing certificate...
	I0813 17:37:38.551896    5839 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:37:38.695427    5839 main.go:141] libmachine: Creating SSH key...
	I0813 17:37:38.816179    5839 main.go:141] libmachine: Creating Disk image...
	I0813 17:37:38.816189    5839 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:37:38.816374    5839 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/custom-flannel-986000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/custom-flannel-986000/disk.qcow2
	I0813 17:37:38.825522    5839 main.go:141] libmachine: STDOUT: 
	I0813 17:37:38.825541    5839 main.go:141] libmachine: STDERR: 
	I0813 17:37:38.825594    5839 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/custom-flannel-986000/disk.qcow2 +20000M
	I0813 17:37:38.833688    5839 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:37:38.833704    5839 main.go:141] libmachine: STDERR: 
	I0813 17:37:38.833724    5839 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/custom-flannel-986000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/custom-flannel-986000/disk.qcow2
	I0813 17:37:38.833729    5839 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:37:38.833746    5839 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:37:38.833778    5839 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/custom-flannel-986000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/custom-flannel-986000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/custom-flannel-986000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:c6:d2:a1:1d:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/custom-flannel-986000/disk.qcow2
	I0813 17:37:38.835364    5839 main.go:141] libmachine: STDOUT: 
	I0813 17:37:38.835387    5839 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:37:38.835408    5839 client.go:171] duration metric: took 284.029875ms to LocalClient.Create
	I0813 17:37:40.837535    5839 start.go:128] duration metric: took 2.306508667s to createHost
	I0813 17:37:40.837583    5839 start.go:83] releasing machines lock for "custom-flannel-986000", held for 2.306609083s
	W0813 17:37:40.837652    5839 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:37:40.844985    5839 out.go:177] * Deleting "custom-flannel-986000" in qemu2 ...
	W0813 17:37:40.867406    5839 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:37:40.867432    5839 start.go:729] Will try again in 5 seconds ...
	I0813 17:37:45.869648    5839 start.go:360] acquireMachinesLock for custom-flannel-986000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:37:45.870090    5839 start.go:364] duration metric: took 330.167µs to acquireMachinesLock for "custom-flannel-986000"
	I0813 17:37:45.870201    5839 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-986000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:37:45.870502    5839 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:37:45.882159    5839 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0813 17:37:45.932439    5839 start.go:159] libmachine.API.Create for "custom-flannel-986000" (driver="qemu2")
	I0813 17:37:45.932496    5839 client.go:168] LocalClient.Create starting
	I0813 17:37:45.932624    5839 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:37:45.932685    5839 main.go:141] libmachine: Decoding PEM data...
	I0813 17:37:45.932716    5839 main.go:141] libmachine: Parsing certificate...
	I0813 17:37:45.932777    5839 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:37:45.932824    5839 main.go:141] libmachine: Decoding PEM data...
	I0813 17:37:45.932834    5839 main.go:141] libmachine: Parsing certificate...
	I0813 17:37:45.933512    5839 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:37:46.087090    5839 main.go:141] libmachine: Creating SSH key...
	I0813 17:37:46.146285    5839 main.go:141] libmachine: Creating Disk image...
	I0813 17:37:46.146291    5839 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:37:46.146488    5839 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/custom-flannel-986000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/custom-flannel-986000/disk.qcow2
	I0813 17:37:46.155529    5839 main.go:141] libmachine: STDOUT: 
	I0813 17:37:46.155557    5839 main.go:141] libmachine: STDERR: 
	I0813 17:37:46.155608    5839 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/custom-flannel-986000/disk.qcow2 +20000M
	I0813 17:37:46.163636    5839 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:37:46.163657    5839 main.go:141] libmachine: STDERR: 
	I0813 17:37:46.163671    5839 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/custom-flannel-986000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/custom-flannel-986000/disk.qcow2
	I0813 17:37:46.163681    5839 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:37:46.163692    5839 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:37:46.163721    5839 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/custom-flannel-986000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/custom-flannel-986000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/custom-flannel-986000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:d7:41:54:8c:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/custom-flannel-986000/disk.qcow2
	I0813 17:37:46.165335    5839 main.go:141] libmachine: STDOUT: 
	I0813 17:37:46.165352    5839 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:37:46.165366    5839 client.go:171] duration metric: took 232.866959ms to LocalClient.Create
	I0813 17:37:48.167450    5839 start.go:128] duration metric: took 2.296954041s to createHost
	I0813 17:37:48.167553    5839 start.go:83] releasing machines lock for "custom-flannel-986000", held for 2.297414s
	W0813 17:37:48.167772    5839 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-986000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-986000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:37:48.176547    5839 out.go:177] 
	W0813 17:37:48.181678    5839 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:37:48.181693    5839 out.go:239] * 
	* 
	W0813 17:37:48.183002    5839 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:37:48.194561    5839 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.79s)
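The two "Creating qemu2 VM" attempts in each capture come from a single built-in retry: after the first createHost fails, minikube deletes the half-created machine, waits five seconds ("Will try again in 5 seconds ..."), and tries exactly once more before exiting with status 80. A sketch of that shape (hypothetical names; not the actual start.go code):

    package main

    import (
        "fmt"
        "log"
        "time"
    )

    // startWithRetry mirrors the retry shape visible in the logs: one failed
    // attempt, a cleanup ("Deleting ... in qemu2 ..."), a fixed 5s pause, then
    // one more attempt. A second failure is what surfaces as
    // GUEST_PROVISION / exit status 80.
    func startWithRetry(create, cleanup func() error) error {
        if err := create(); err != nil {
            log.Printf("StartHost failed, but will try again: %v", err)
            _ = cleanup()
            time.Sleep(5 * time.Second)
            return create()
        }
        return nil
    }

    func main() {
        attempt := 0
        err := startWithRetry(
            func() error {
                attempt++
                return fmt.Errorf("attempt %d: connection refused", attempt)
            },
            func() error { return nil },
        )
        log.Printf("gave up after %d attempts: %v", attempt, err)
    }

Because the socket is refused for a host-level reason, the retry changes nothing here; each test simply burns the extra five seconds, which is why every Start fails in roughly 9.7 to 9.9 seconds.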

                                                
                                    
TestNetworkPlugins/group/calico/Start (9.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-986000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-986000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.824776292s)

                                                
                                                
-- stdout --
	* [calico-986000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-986000" primary control-plane node in "calico-986000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-986000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 17:37:50.606706    5971 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:37:50.606826    5971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:37:50.606829    5971 out.go:304] Setting ErrFile to fd 2...
	I0813 17:37:50.606832    5971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:37:50.606944    5971 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:37:50.608070    5971 out.go:298] Setting JSON to false
	I0813 17:37:50.624083    5971 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4034,"bootTime":1723591836,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:37:50.624149    5971 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:37:50.630116    5971 out.go:177] * [calico-986000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:37:50.638002    5971 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:37:50.638088    5971 notify.go:220] Checking for updates...
	I0813 17:37:50.647096    5971 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:37:50.650099    5971 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:37:50.653070    5971 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:37:50.656117    5971 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:37:50.659067    5971 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:37:50.662374    5971 config.go:182] Loaded profile config "multinode-980000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:37:50.662448    5971 config.go:182] Loaded profile config "stopped-upgrade-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0813 17:37:50.662505    5971 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:37:50.667089    5971 out.go:177] * Using the qemu2 driver based on user configuration
	I0813 17:37:50.674084    5971 start.go:297] selected driver: qemu2
	I0813 17:37:50.674090    5971 start.go:901] validating driver "qemu2" against <nil>
	I0813 17:37:50.674096    5971 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:37:50.676443    5971 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 17:37:50.680089    5971 out.go:177] * Automatically selected the socket_vmnet network
	I0813 17:37:50.681628    5971 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 17:37:50.681647    5971 cni.go:84] Creating CNI manager for "calico"
	I0813 17:37:50.681654    5971 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0813 17:37:50.681691    5971 start.go:340] cluster config:
	{Name:calico-986000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:37:50.685283    5971 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:37:50.693114    5971 out.go:177] * Starting "calico-986000" primary control-plane node in "calico-986000" cluster
	I0813 17:37:50.697098    5971 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:37:50.697111    5971 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 17:37:50.697118    5971 cache.go:56] Caching tarball of preloaded images
	I0813 17:37:50.697174    5971 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:37:50.697180    5971 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0813 17:37:50.697242    5971 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/calico-986000/config.json ...
	I0813 17:37:50.697252    5971 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/calico-986000/config.json: {Name:mkdd4788d9136a7a7c5cd3525abc8b4a78c9e73e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:37:50.697474    5971 start.go:360] acquireMachinesLock for calico-986000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:37:50.697507    5971 start.go:364] duration metric: took 26.917µs to acquireMachinesLock for "calico-986000"
	I0813 17:37:50.697519    5971 start.go:93] Provisioning new machine with config: &{Name:calico-986000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:calico-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:37:50.697553    5971 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:37:50.705061    5971 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0813 17:37:50.721800    5971 start.go:159] libmachine.API.Create for "calico-986000" (driver="qemu2")
	I0813 17:37:50.721831    5971 client.go:168] LocalClient.Create starting
	I0813 17:37:50.721911    5971 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:37:50.721942    5971 main.go:141] libmachine: Decoding PEM data...
	I0813 17:37:50.721952    5971 main.go:141] libmachine: Parsing certificate...
	I0813 17:37:50.721996    5971 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:37:50.722021    5971 main.go:141] libmachine: Decoding PEM data...
	I0813 17:37:50.722031    5971 main.go:141] libmachine: Parsing certificate...
	I0813 17:37:50.722362    5971 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:37:50.871266    5971 main.go:141] libmachine: Creating SSH key...
	I0813 17:37:50.924729    5971 main.go:141] libmachine: Creating Disk image...
	I0813 17:37:50.924734    5971 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:37:50.924915    5971 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/calico-986000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/calico-986000/disk.qcow2
	I0813 17:37:50.934057    5971 main.go:141] libmachine: STDOUT: 
	I0813 17:37:50.934076    5971 main.go:141] libmachine: STDERR: 
	I0813 17:37:50.934120    5971 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/calico-986000/disk.qcow2 +20000M
	I0813 17:37:50.942054    5971 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:37:50.942069    5971 main.go:141] libmachine: STDERR: 
	I0813 17:37:50.942088    5971 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/calico-986000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/calico-986000/disk.qcow2
	I0813 17:37:50.942092    5971 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:37:50.942105    5971 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:37:50.942131    5971 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/calico-986000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/calico-986000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/calico-986000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:e3:e4:73:44:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/calico-986000/disk.qcow2
	I0813 17:37:50.943707    5971 main.go:141] libmachine: STDOUT: 
	I0813 17:37:50.943721    5971 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:37:50.943743    5971 client.go:171] duration metric: took 221.910375ms to LocalClient.Create
	I0813 17:37:52.946010    5971 start.go:128] duration metric: took 2.24846075s to createHost
	I0813 17:37:52.946100    5971 start.go:83] releasing machines lock for "calico-986000", held for 2.248621167s
	W0813 17:37:52.946145    5971 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:37:52.955002    5971 out.go:177] * Deleting "calico-986000" in qemu2 ...
	W0813 17:37:52.988098    5971 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:37:52.988137    5971 start.go:729] Will try again in 5 seconds ...
	I0813 17:37:57.990082    5971 start.go:360] acquireMachinesLock for calico-986000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:37:57.990498    5971 start.go:364] duration metric: took 329.958µs to acquireMachinesLock for "calico-986000"
	I0813 17:37:57.990591    5971 start.go:93] Provisioning new machine with config: &{Name:calico-986000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:calico-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:37:57.990827    5971 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:37:58.002823    5971 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0813 17:37:58.050737    5971 start.go:159] libmachine.API.Create for "calico-986000" (driver="qemu2")
	I0813 17:37:58.050791    5971 client.go:168] LocalClient.Create starting
	I0813 17:37:58.050949    5971 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:37:58.051025    5971 main.go:141] libmachine: Decoding PEM data...
	I0813 17:37:58.051043    5971 main.go:141] libmachine: Parsing certificate...
	I0813 17:37:58.051111    5971 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:37:58.051161    5971 main.go:141] libmachine: Decoding PEM data...
	I0813 17:37:58.051175    5971 main.go:141] libmachine: Parsing certificate...
	I0813 17:37:58.051835    5971 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:37:58.200173    5971 main.go:141] libmachine: Creating SSH key...
	I0813 17:37:58.339127    5971 main.go:141] libmachine: Creating Disk image...
	I0813 17:37:58.339137    5971 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:37:58.339334    5971 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/calico-986000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/calico-986000/disk.qcow2
	I0813 17:37:58.348818    5971 main.go:141] libmachine: STDOUT: 
	I0813 17:37:58.348839    5971 main.go:141] libmachine: STDERR: 
	I0813 17:37:58.348886    5971 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/calico-986000/disk.qcow2 +20000M
	I0813 17:37:58.357133    5971 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:37:58.357152    5971 main.go:141] libmachine: STDERR: 
	I0813 17:37:58.357164    5971 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/calico-986000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/calico-986000/disk.qcow2
	I0813 17:37:58.357169    5971 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:37:58.357190    5971 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:37:58.357218    5971 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/calico-986000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/calico-986000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/calico-986000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:15:62:6a:12:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/calico-986000/disk.qcow2
	I0813 17:37:58.358851    5971 main.go:141] libmachine: STDOUT: 
	I0813 17:37:58.358868    5971 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:37:58.358888    5971 client.go:171] duration metric: took 308.093042ms to LocalClient.Create
	I0813 17:38:00.361051    5971 start.go:128] duration metric: took 2.3702245s to createHost
	I0813 17:38:00.361166    5971 start.go:83] releasing machines lock for "calico-986000", held for 2.370683709s
	W0813 17:38:00.361561    5971 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-986000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-986000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:38:00.376236    5971 out.go:177] 
	W0813 17:38:00.380862    5971 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:38:00.380902    5971 out.go:239] * 
	W0813 17:38:00.383369    5971 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:38:00.392230    5971 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.83s)
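
Every qemu2 start in this group dies at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon, so QEMU is never handed its network file descriptor and libmachine aborts the create. A minimal probe (a standalone sketch, not minikube code; the socket path is copied from the logs above) reproduces the "Connection refused" outside the harness:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Path taken verbatim from the failing logs above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Expected on this agent while the daemon is down: "connection refused".
			fmt.Printf("socket_vmnet not reachable: %v\n", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

On a healthy agent this prints the success line; here it should fail exactly as the STDERR lines above do, which points at the socket_vmnet service on the host rather than at QEMU or the tests themselves.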

TestNetworkPlugins/group/false/Start (9.78s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-986000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-986000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.778773s)

-- stdout --
	* [false-986000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-986000" primary control-plane node in "false-986000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-986000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0813 17:38:02.751446    6106 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:38:02.751574    6106 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:38:02.751578    6106 out.go:304] Setting ErrFile to fd 2...
	I0813 17:38:02.751580    6106 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:38:02.751696    6106 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:38:02.752767    6106 out.go:298] Setting JSON to false
	I0813 17:38:02.768889    6106 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4046,"bootTime":1723591836,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:38:02.768958    6106 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:38:02.775883    6106 out.go:177] * [false-986000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:38:02.779891    6106 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:38:02.779986    6106 notify.go:220] Checking for updates...
	I0813 17:38:02.787856    6106 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:38:02.791865    6106 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:38:02.794871    6106 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:38:02.797841    6106 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:38:02.800876    6106 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:38:02.804204    6106 config.go:182] Loaded profile config "multinode-980000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:38:02.804273    6106 config.go:182] Loaded profile config "stopped-upgrade-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0813 17:38:02.804334    6106 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:38:02.808846    6106 out.go:177] * Using the qemu2 driver based on user configuration
	I0813 17:38:02.815843    6106 start.go:297] selected driver: qemu2
	I0813 17:38:02.815850    6106 start.go:901] validating driver "qemu2" against <nil>
	I0813 17:38:02.815856    6106 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:38:02.818237    6106 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 17:38:02.822858    6106 out.go:177] * Automatically selected the socket_vmnet network
	I0813 17:38:02.825951    6106 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 17:38:02.825971    6106 cni.go:84] Creating CNI manager for "false"
	I0813 17:38:02.826008    6106 start.go:340] cluster config:
	{Name:false-986000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:38:02.829644    6106 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:38:02.836864    6106 out.go:177] * Starting "false-986000" primary control-plane node in "false-986000" cluster
	I0813 17:38:02.840703    6106 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:38:02.840723    6106 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 17:38:02.840735    6106 cache.go:56] Caching tarball of preloaded images
	I0813 17:38:02.840796    6106 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:38:02.840801    6106 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0813 17:38:02.840860    6106 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/false-986000/config.json ...
	I0813 17:38:02.840871    6106 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/false-986000/config.json: {Name:mk3b4785b014cfee7a450b34d1d96ef3e891673d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:38:02.841137    6106 start.go:360] acquireMachinesLock for false-986000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:38:02.841181    6106 start.go:364] duration metric: took 36.166µs to acquireMachinesLock for "false-986000"
	I0813 17:38:02.841195    6106 start.go:93] Provisioning new machine with config: &{Name:false-986000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:38:02.841231    6106 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:38:02.843901    6106 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0813 17:38:02.859819    6106 start.go:159] libmachine.API.Create for "false-986000" (driver="qemu2")
	I0813 17:38:02.859845    6106 client.go:168] LocalClient.Create starting
	I0813 17:38:02.859910    6106 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:38:02.859939    6106 main.go:141] libmachine: Decoding PEM data...
	I0813 17:38:02.859951    6106 main.go:141] libmachine: Parsing certificate...
	I0813 17:38:02.859987    6106 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:38:02.860010    6106 main.go:141] libmachine: Decoding PEM data...
	I0813 17:38:02.860018    6106 main.go:141] libmachine: Parsing certificate...
	I0813 17:38:02.860350    6106 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:38:03.028070    6106 main.go:141] libmachine: Creating SSH key...
	I0813 17:38:03.137851    6106 main.go:141] libmachine: Creating Disk image...
	I0813 17:38:03.137861    6106 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:38:03.138063    6106 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/false-986000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/false-986000/disk.qcow2
	I0813 17:38:03.147418    6106 main.go:141] libmachine: STDOUT: 
	I0813 17:38:03.147435    6106 main.go:141] libmachine: STDERR: 
	I0813 17:38:03.147494    6106 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/false-986000/disk.qcow2 +20000M
	I0813 17:38:03.155806    6106 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:38:03.155887    6106 main.go:141] libmachine: STDERR: 
	I0813 17:38:03.155900    6106 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/false-986000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/false-986000/disk.qcow2
	I0813 17:38:03.155905    6106 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:38:03.155920    6106 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:38:03.155944    6106 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/false-986000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/false-986000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/false-986000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:d5:3e:46:65:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/false-986000/disk.qcow2
	I0813 17:38:03.157551    6106 main.go:141] libmachine: STDOUT: 
	I0813 17:38:03.157567    6106 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:38:03.157588    6106 client.go:171] duration metric: took 297.741708ms to LocalClient.Create
	I0813 17:38:05.159634    6106 start.go:128] duration metric: took 2.31843625s to createHost
	I0813 17:38:05.159643    6106 start.go:83] releasing machines lock for "false-986000", held for 2.318494917s
	W0813 17:38:05.159659    6106 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:38:05.163360    6106 out.go:177] * Deleting "false-986000" in qemu2 ...
	W0813 17:38:05.179636    6106 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:38:05.179647    6106 start.go:729] Will try again in 5 seconds ...
	I0813 17:38:10.181736    6106 start.go:360] acquireMachinesLock for false-986000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:38:10.182143    6106 start.go:364] duration metric: took 297.25µs to acquireMachinesLock for "false-986000"
	I0813 17:38:10.182250    6106 start.go:93] Provisioning new machine with config: &{Name:false-986000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:38:10.182446    6106 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:38:10.196796    6106 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0813 17:38:10.229219    6106 start.go:159] libmachine.API.Create for "false-986000" (driver="qemu2")
	I0813 17:38:10.229270    6106 client.go:168] LocalClient.Create starting
	I0813 17:38:10.229386    6106 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:38:10.229443    6106 main.go:141] libmachine: Decoding PEM data...
	I0813 17:38:10.229456    6106 main.go:141] libmachine: Parsing certificate...
	I0813 17:38:10.229505    6106 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:38:10.229543    6106 main.go:141] libmachine: Decoding PEM data...
	I0813 17:38:10.229551    6106 main.go:141] libmachine: Parsing certificate...
	I0813 17:38:10.229972    6106 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:38:10.374153    6106 main.go:141] libmachine: Creating SSH key...
	I0813 17:38:10.440289    6106 main.go:141] libmachine: Creating Disk image...
	I0813 17:38:10.440295    6106 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:38:10.440727    6106 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/false-986000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/false-986000/disk.qcow2
	I0813 17:38:10.449814    6106 main.go:141] libmachine: STDOUT: 
	I0813 17:38:10.449835    6106 main.go:141] libmachine: STDERR: 
	I0813 17:38:10.449894    6106 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/false-986000/disk.qcow2 +20000M
	I0813 17:38:10.457796    6106 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:38:10.457810    6106 main.go:141] libmachine: STDERR: 
	I0813 17:38:10.457822    6106 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/false-986000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/false-986000/disk.qcow2
	I0813 17:38:10.457827    6106 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:38:10.457841    6106 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:38:10.457872    6106 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/false-986000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/false-986000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/false-986000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:f8:46:d3:23:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/false-986000/disk.qcow2
	I0813 17:38:10.459468    6106 main.go:141] libmachine: STDOUT: 
	I0813 17:38:10.459486    6106 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:38:10.459498    6106 client.go:171] duration metric: took 230.223833ms to LocalClient.Create
	I0813 17:38:12.461724    6106 start.go:128] duration metric: took 2.279253625s to createHost
	I0813 17:38:12.461805    6106 start.go:83] releasing machines lock for "false-986000", held for 2.279653791s
	W0813 17:38:12.462207    6106 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-986000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:38:12.471859    6106 out.go:177] 
	W0813 17:38:12.477880    6106 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:38:12.477919    6106 out.go:239] * 
	W0813 17:38:12.480790    6106 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:38:12.488799    6106 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.78s)
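
The stderr above shows minikube's full recovery path: the first createHost fails (start.go:714), the half-created profile is deleted, and after a fixed five-second backoff (start.go:729) a second createHost hits the same refused connection, which is what surfaces as GUEST_PROVISION and exit status 80. A compilable sketch of that control flow, using hypothetical stand-ins rather than minikube's real API:

	package main

	import (
		"errors"
		"log"
		"time"
	)

	// createHost is a stand-in for libmachine.API.Create; on this agent it
	// always fails the way the logs above do.
	func createHost(name string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	// deleteHost stands in for the profile cleanup between attempts.
	func deleteHost(name string) {
		log.Printf("* Deleting %q in qemu2 ...", name)
	}

	func startWithRetry(name string) error {
		if err := createHost(name); err != nil {
			log.Printf("! StartHost failed, but will try again: %v", err)
			deleteHost(name)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			return createHost(name)     // a second failure is terminal
		}
		return nil
	}

	func main() {
		if err := startWithRetry("false-986000"); err != nil {
			log.Fatalf("X Exiting due to GUEST_PROVISION: %v", err) // analogue of exit status 80
		}
	}

Because the daemon never comes back between attempts, the single retry only adds about five seconds to each failing test, which is why so many of these starts cluster around the ten-second mark.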

TestStartStop/group/old-k8s-version/serial/FirstStart (9.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-971000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-971000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.763920292s)

-- stdout --
	* [old-k8s-version-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-971000" primary control-plane node in "old-k8s-version-971000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-971000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0813 17:38:14.639761    6223 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:38:14.639887    6223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:38:14.639889    6223 out.go:304] Setting ErrFile to fd 2...
	I0813 17:38:14.639892    6223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:38:14.640025    6223 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:38:14.641129    6223 out.go:298] Setting JSON to false
	I0813 17:38:14.657854    6223 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4058,"bootTime":1723591836,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:38:14.657917    6223 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:38:14.664197    6223 out.go:177] * [old-k8s-version-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:38:14.672036    6223 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:38:14.672116    6223 notify.go:220] Checking for updates...
	I0813 17:38:14.678983    6223 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:38:14.682007    6223 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:38:14.686064    6223 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:38:14.688996    6223 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:38:14.692027    6223 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:38:14.695357    6223 config.go:182] Loaded profile config "multinode-980000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:38:14.695423    6223 config.go:182] Loaded profile config "stopped-upgrade-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0813 17:38:14.695474    6223 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:38:14.698967    6223 out.go:177] * Using the qemu2 driver based on user configuration
	I0813 17:38:14.706039    6223 start.go:297] selected driver: qemu2
	I0813 17:38:14.706044    6223 start.go:901] validating driver "qemu2" against <nil>
	I0813 17:38:14.706050    6223 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:38:14.708284    6223 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 17:38:14.712008    6223 out.go:177] * Automatically selected the socket_vmnet network
	I0813 17:38:14.715099    6223 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 17:38:14.715123    6223 cni.go:84] Creating CNI manager for ""
	I0813 17:38:14.715130    6223 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0813 17:38:14.715168    6223 start.go:340] cluster config:
	{Name:old-k8s-version-971000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:38:14.718586    6223 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:38:14.727036    6223 out.go:177] * Starting "old-k8s-version-971000" primary control-plane node in "old-k8s-version-971000" cluster
	I0813 17:38:14.731034    6223 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0813 17:38:14.731055    6223 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0813 17:38:14.731063    6223 cache.go:56] Caching tarball of preloaded images
	I0813 17:38:14.731125    6223 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:38:14.731130    6223 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0813 17:38:14.731185    6223 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/old-k8s-version-971000/config.json ...
	I0813 17:38:14.731196    6223 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/old-k8s-version-971000/config.json: {Name:mkfad6454c5574575f21c1c42535ead2942c5bd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:38:14.731531    6223 start.go:360] acquireMachinesLock for old-k8s-version-971000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:38:14.731564    6223 start.go:364] duration metric: took 25µs to acquireMachinesLock for "old-k8s-version-971000"
	I0813 17:38:14.731576    6223 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-971000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:38:14.731599    6223 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:38:14.740056    6223 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 17:38:14.755162    6223 start.go:159] libmachine.API.Create for "old-k8s-version-971000" (driver="qemu2")
	I0813 17:38:14.755182    6223 client.go:168] LocalClient.Create starting
	I0813 17:38:14.755248    6223 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:38:14.755278    6223 main.go:141] libmachine: Decoding PEM data...
	I0813 17:38:14.755288    6223 main.go:141] libmachine: Parsing certificate...
	I0813 17:38:14.755325    6223 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:38:14.755347    6223 main.go:141] libmachine: Decoding PEM data...
	I0813 17:38:14.755357    6223 main.go:141] libmachine: Parsing certificate...
	I0813 17:38:14.755670    6223 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:38:14.895757    6223 main.go:141] libmachine: Creating SSH key...
	I0813 17:38:15.025352    6223 main.go:141] libmachine: Creating Disk image...
	I0813 17:38:15.025360    6223 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:38:15.025546    6223 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000/disk.qcow2
	I0813 17:38:15.035302    6223 main.go:141] libmachine: STDOUT: 
	I0813 17:38:15.035321    6223 main.go:141] libmachine: STDERR: 
	I0813 17:38:15.035378    6223 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000/disk.qcow2 +20000M
	I0813 17:38:15.044029    6223 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:38:15.044059    6223 main.go:141] libmachine: STDERR: 
	I0813 17:38:15.044078    6223 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000/disk.qcow2
	I0813 17:38:15.044083    6223 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:38:15.044091    6223 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:38:15.044120    6223 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:f3:ec:8f:ae:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000/disk.qcow2
	I0813 17:38:15.045792    6223 main.go:141] libmachine: STDOUT: 
	I0813 17:38:15.045809    6223 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:38:15.045830    6223 client.go:171] duration metric: took 290.647416ms to LocalClient.Create
	I0813 17:38:17.047908    6223 start.go:128] duration metric: took 2.316335875s to createHost
	I0813 17:38:17.047938    6223 start.go:83] releasing machines lock for "old-k8s-version-971000", held for 2.316407792s
	W0813 17:38:17.047990    6223 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:38:17.052961    6223 out.go:177] * Deleting "old-k8s-version-971000" in qemu2 ...
	W0813 17:38:17.081404    6223 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:38:17.081417    6223 start.go:729] Will try again in 5 seconds ...
	I0813 17:38:22.083421    6223 start.go:360] acquireMachinesLock for old-k8s-version-971000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:38:22.083525    6223 start.go:364] duration metric: took 84.167µs to acquireMachinesLock for "old-k8s-version-971000"
	I0813 17:38:22.083553    6223 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-971000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:38:22.083613    6223 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:38:22.093871    6223 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 17:38:22.110056    6223 start.go:159] libmachine.API.Create for "old-k8s-version-971000" (driver="qemu2")
	I0813 17:38:22.110079    6223 client.go:168] LocalClient.Create starting
	I0813 17:38:22.110150    6223 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:38:22.110183    6223 main.go:141] libmachine: Decoding PEM data...
	I0813 17:38:22.110194    6223 main.go:141] libmachine: Parsing certificate...
	I0813 17:38:22.110225    6223 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:38:22.110248    6223 main.go:141] libmachine: Decoding PEM data...
	I0813 17:38:22.110258    6223 main.go:141] libmachine: Parsing certificate...
	I0813 17:38:22.110623    6223 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:38:22.252845    6223 main.go:141] libmachine: Creating SSH key...
	I0813 17:38:22.316969    6223 main.go:141] libmachine: Creating Disk image...
	I0813 17:38:22.316977    6223 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:38:22.317164    6223 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000/disk.qcow2
	I0813 17:38:22.326464    6223 main.go:141] libmachine: STDOUT: 
	I0813 17:38:22.326484    6223 main.go:141] libmachine: STDERR: 
	I0813 17:38:22.326541    6223 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000/disk.qcow2 +20000M
	I0813 17:38:22.334763    6223 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:38:22.334782    6223 main.go:141] libmachine: STDERR: 
	I0813 17:38:22.334795    6223 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000/disk.qcow2
	I0813 17:38:22.334799    6223 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:38:22.334810    6223 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:38:22.334840    6223 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:b0:ba:b6:31:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000/disk.qcow2
	I0813 17:38:22.336500    6223 main.go:141] libmachine: STDOUT: 
	I0813 17:38:22.336517    6223 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:38:22.336529    6223 client.go:171] duration metric: took 226.451125ms to LocalClient.Create
	I0813 17:38:24.338606    6223 start.go:128] duration metric: took 2.255018s to createHost
	I0813 17:38:24.338641    6223 start.go:83] releasing machines lock for "old-k8s-version-971000", held for 2.255147292s
	W0813 17:38:24.338885    6223 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-971000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:38:24.348509    6223 out.go:177] 
	W0813 17:38:24.354368    6223 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:38:24.354381    6223 out.go:239] * 
	W0813 17:38:24.355450    6223 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:38:24.367396    6223 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-971000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000: exit status 7 (51.687583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.82s)
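
Note that disk preparation is never the problem: in every run above, the qemu-img convert and resize steps complete with empty STDERR, and only the subsequent socket_vmnet_client launch fails. The two invocations can be replayed in isolation (a sketch; the machine path is taken from this test's logs, and qemu-img is assumed to be on PATH):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and echoes its combined output, mirroring the
	// "executing: ..." / STDOUT / STDERR lines in the log above.
	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("executing: %s %v\n%s", name, args, out)
		return err
	}

	func main() {
		base := "/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000"
		// raw -> qcow2 conversion, then grow the image by 20000M, as logged above.
		if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", base+"/disk.qcow2.raw", base+"/disk.qcow2"); err != nil {
			panic(err)
		}
		if err := run("qemu-img", "resize", base+"/disk.qcow2", "+20000M"); err != nil {
			panic(err)
		}
	}

If these succeed while the VM launch still fails, that narrows the regression to the host networking daemon rather than the QEMU or image tooling.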

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-971000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-971000 create -f testdata/busybox.yaml: exit status 1 (29.08125ms)

** stderr ** 
	error: context "old-k8s-version-971000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-971000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000: exit status 7 (30.253708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-971000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000: exit status 7 (29.155084ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
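This failure is a cascade from the failed FirstStart: because "minikube start" exited before provisioning, no old-k8s-version-971000 context was ever written to the kubeconfig, so every kubectl --context invocation in the remaining serial steps fails with the same "context does not exist" error. A small sketch of the same precondition check (the kubectl subcommand is real; the wrapper program is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `kubectl config get-contexts NAME` exits non-zero when the named
        // context is absent, which is the state this test inherited.
        out, err := exec.Command("kubectl", "config", "get-contexts", "old-k8s-version-971000").CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("context missing, nothing to deploy against:", err)
        }
    }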
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-971000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-971000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-971000 describe deploy/metrics-server -n kube-system: exit status 1 (26.583459ms)
** stderr ** 
	error: context "old-k8s-version-971000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-971000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000: exit status 7 (29.160167ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
TestStartStop/group/old-k8s-version/serial/SecondStart (5.23s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-971000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-971000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.187407292s)
-- stdout --
	* [old-k8s-version-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-971000" primary control-plane node in "old-k8s-version-971000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-971000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-971000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0813 17:38:28.616166    6291 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:38:28.616301    6291 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:38:28.616306    6291 out.go:304] Setting ErrFile to fd 2...
	I0813 17:38:28.616309    6291 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:38:28.616444    6291 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:38:28.617494    6291 out.go:298] Setting JSON to false
	I0813 17:38:28.633642    6291 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4072,"bootTime":1723591836,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:38:28.633707    6291 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:38:28.638402    6291 out.go:177] * [old-k8s-version-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:38:28.645423    6291 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:38:28.645483    6291 notify.go:220] Checking for updates...
	I0813 17:38:28.654331    6291 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:38:28.661376    6291 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:38:28.662704    6291 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:38:28.665338    6291 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:38:28.668333    6291 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:38:28.671641    6291 config.go:182] Loaded profile config "old-k8s-version-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0813 17:38:28.674369    6291 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0813 17:38:28.677376    6291 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:38:28.681378    6291 out.go:177] * Using the qemu2 driver based on existing profile
	I0813 17:38:28.687377    6291 start.go:297] selected driver: qemu2
	I0813 17:38:28.687381    6291 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-971000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:38:28.687437    6291 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:38:28.689753    6291 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 17:38:28.689804    6291 cni.go:84] Creating CNI manager for ""
	I0813 17:38:28.689812    6291 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0813 17:38:28.689840    6291 start.go:340] cluster config:
	{Name:old-k8s-version-971000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:38:28.693280    6291 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:38:28.700351    6291 out.go:177] * Starting "old-k8s-version-971000" primary control-plane node in "old-k8s-version-971000" cluster
	I0813 17:38:28.704214    6291 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0813 17:38:28.704232    6291 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0813 17:38:28.704240    6291 cache.go:56] Caching tarball of preloaded images
	I0813 17:38:28.704296    6291 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:38:28.704301    6291 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0813 17:38:28.704354    6291 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/old-k8s-version-971000/config.json ...
	I0813 17:38:28.704775    6291 start.go:360] acquireMachinesLock for old-k8s-version-971000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:38:28.704804    6291 start.go:364] duration metric: took 22.916µs to acquireMachinesLock for "old-k8s-version-971000"
	I0813 17:38:28.704816    6291 start.go:96] Skipping create...Using existing machine configuration
	I0813 17:38:28.704822    6291 fix.go:54] fixHost starting: 
	I0813 17:38:28.704939    6291 fix.go:112] recreateIfNeeded on old-k8s-version-971000: state=Stopped err=<nil>
	W0813 17:38:28.704949    6291 fix.go:138] unexpected machine state, will restart: <nil>
	I0813 17:38:28.709383    6291 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-971000" ...
	I0813 17:38:28.717320    6291 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:38:28.717355    6291 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:b0:ba:b6:31:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000/disk.qcow2
	I0813 17:38:28.719290    6291 main.go:141] libmachine: STDOUT: 
	I0813 17:38:28.719310    6291 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:38:28.719339    6291 fix.go:56] duration metric: took 14.516583ms for fixHost
	I0813 17:38:28.719346    6291 start.go:83] releasing machines lock for "old-k8s-version-971000", held for 14.534917ms
	W0813 17:38:28.719351    6291 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:38:28.719377    6291 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:38:28.719382    6291 start.go:729] Will try again in 5 seconds ...
	I0813 17:38:33.721518    6291 start.go:360] acquireMachinesLock for old-k8s-version-971000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:38:33.721945    6291 start.go:364] duration metric: took 313.208µs to acquireMachinesLock for "old-k8s-version-971000"
	I0813 17:38:33.722036    6291 start.go:96] Skipping create...Using existing machine configuration
	I0813 17:38:33.722052    6291 fix.go:54] fixHost starting: 
	I0813 17:38:33.722871    6291 fix.go:112] recreateIfNeeded on old-k8s-version-971000: state=Stopped err=<nil>
	W0813 17:38:33.722906    6291 fix.go:138] unexpected machine state, will restart: <nil>
	I0813 17:38:33.732398    6291 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-971000" ...
	I0813 17:38:33.736367    6291 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:38:33.736590    6291 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:b0:ba:b6:31:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/old-k8s-version-971000/disk.qcow2
	I0813 17:38:33.746876    6291 main.go:141] libmachine: STDOUT: 
	I0813 17:38:33.747002    6291 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:38:33.747101    6291 fix.go:56] duration metric: took 25.048917ms for fixHost
	I0813 17:38:33.747122    6291 start.go:83] releasing machines lock for "old-k8s-version-971000", held for 25.152792ms
	W0813 17:38:33.747395    6291 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-971000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-971000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:38:33.753365    6291 out.go:177] 
	W0813 17:38:33.757490    6291 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:38:33.757519    6291 out.go:239] * 
	* 
	W0813 17:38:33.758969    6291 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:38:33.766411    6291 out.go:177] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-971000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000: exit status 7 (45.992833ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.23s)
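The SecondStart stderr shows minikube's restart retry policy in miniature: one StartHost failure, a fixed five-second wait ("Will try again in 5 seconds"), one more identical failure, then exit status 80 (GUEST_PROVISION). A simplified sketch of that control flow (function names are illustrative, not minikube's):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startWithRetry mirrors the log: try once, wait 5s, try once more,
    // and surface whatever error the second attempt returns.
    func startWithRetry(start func() error) error {
        err := start()
        if err == nil {
            return nil
        }
        fmt.Println("! StartHost failed, but will try again:", err)
        time.Sleep(5 * time.Second)
        return start()
    }

    func main() {
        err := startWithRetry(func() error {
            return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
        })
        fmt.Println("final error:", err)
    }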
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-971000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000: exit status 7 (29.625667ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-971000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-971000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-971000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.9035ms)
** stderr ** 
	error: context "old-k8s-version-971000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-971000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000: exit status 7 (28.63725ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-971000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000: exit status 7 (28.64675ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
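The "-want +got" block above is a go-cmp style diff: every expected v1.20.0 image sits on the minus side and nothing appears on the plus side, because "image list" ran against a profile whose VM never booted. A minimal sketch of how such a diff is rendered, assuming github.com/google/go-cmp (the want list is abbreviated from the one above):

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        // Abbreviated expectations; got stays empty because the stopped
        // profile reported no images at all.
        want := []string{"k8s.gcr.io/pause:3.2", "k8s.gcr.io/etcd:3.4.13-0"}
        var got []string
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
        }
    }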
TestStartStop/group/old-k8s-version/serial/Pause (0.1s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-971000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-971000 --alsologtostderr -v=1: exit status 83 (39.142541ms)
-- stdout --
	* The control-plane node old-k8s-version-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-971000"
-- /stdout --
** stderr ** 
	I0813 17:38:34.004419    6317 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:38:34.005282    6317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:38:34.005286    6317 out.go:304] Setting ErrFile to fd 2...
	I0813 17:38:34.005288    6317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:38:34.005415    6317 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:38:34.005593    6317 out.go:298] Setting JSON to false
	I0813 17:38:34.005601    6317 mustload.go:65] Loading cluster: old-k8s-version-971000
	I0813 17:38:34.005782    6317 config.go:182] Loaded profile config "old-k8s-version-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0813 17:38:34.008623    6317 out.go:177] * The control-plane node old-k8s-version-971000 host is not running: state=Stopped
	I0813 17:38:34.011569    6317 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-971000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-971000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000: exit status 7 (28.139792ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-971000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000: exit status 7 (28.116667ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
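Unlike the start failures (exit 80), "pause" exits with status 83 here: mustload sees the host state is Stopped, prints the "To start a cluster" hint, and returns without attempting to pause anything. A sketch of reading that exit code the way the test harness does, via os/exec (binary and profile names copied from the command above):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "old-k8s-version-971000")
        err := cmd.Run()
        // Against a stopped profile this is expected to report 83,
        // matching the non-zero exit asserted above.
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            fmt.Println("exit code:", ee.ExitCode())
        }
    }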
TestStartStop/group/no-preload/serial/FirstStart (9.89s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-216000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-216000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.82060125s)
-- stdout --
	* [no-preload-216000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-216000" primary control-plane node in "no-preload-216000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-216000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0813 17:38:34.311223    6334 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:38:34.311333    6334 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:38:34.311336    6334 out.go:304] Setting ErrFile to fd 2...
	I0813 17:38:34.311339    6334 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:38:34.311464    6334 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:38:34.312518    6334 out.go:298] Setting JSON to false
	I0813 17:38:34.328978    6334 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4078,"bootTime":1723591836,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:38:34.329067    6334 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:38:34.333223    6334 out.go:177] * [no-preload-216000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:38:34.337899    6334 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:38:34.337963    6334 notify.go:220] Checking for updates...
	I0813 17:38:34.344246    6334 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:38:34.349220    6334 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:38:34.353256    6334 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:38:34.356186    6334 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:38:34.360223    6334 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:38:34.364575    6334 config.go:182] Loaded profile config "multinode-980000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:38:34.364657    6334 config.go:182] Loaded profile config "stopped-upgrade-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0813 17:38:34.364701    6334 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:38:34.369213    6334 out.go:177] * Using the qemu2 driver based on user configuration
	I0813 17:38:34.376223    6334 start.go:297] selected driver: qemu2
	I0813 17:38:34.376229    6334 start.go:901] validating driver "qemu2" against <nil>
	I0813 17:38:34.376235    6334 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:38:34.378559    6334 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 17:38:34.381222    6334 out.go:177] * Automatically selected the socket_vmnet network
	I0813 17:38:34.384294    6334 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 17:38:34.384333    6334 cni.go:84] Creating CNI manager for ""
	I0813 17:38:34.384341    6334 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0813 17:38:34.384348    6334 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 17:38:34.384376    6334 start.go:340] cluster config:
	{Name:no-preload-216000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-216000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:38:34.388242    6334 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:38:34.396210    6334 out.go:177] * Starting "no-preload-216000" primary control-plane node in "no-preload-216000" cluster
	I0813 17:38:34.400238    6334 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:38:34.400308    6334 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/no-preload-216000/config.json ...
	I0813 17:38:34.400331    6334 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/no-preload-216000/config.json: {Name:mkbfc35b0c47b1820766da9c48d880036062f3d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:38:34.400334    6334 cache.go:107] acquiring lock: {Name:mke14a3dc3194db543c276212c81745047c71d9e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:38:34.400368    6334 cache.go:107] acquiring lock: {Name:mk70a1bf4c201720c543f0b61415fa6826588f63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:38:34.400394    6334 cache.go:115] /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0813 17:38:34.400391    6334 cache.go:107] acquiring lock: {Name:mka5c282cb2fd549abad1dd055e7de80a0d0f42f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:38:34.400419    6334 cache.go:107] acquiring lock: {Name:mk9095444d79c2a6f00b4b011d7f024cb4fe180f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:38:34.400463    6334 cache.go:107] acquiring lock: {Name:mk8aaee9748bf2bed30221ac00fdfb8c50ae80bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:38:34.400402    6334 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 70.417µs
	I0813 17:38:34.400542    6334 start.go:360] acquireMachinesLock for no-preload-216000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:38:34.400542    6334 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0813 17:38:34.400490    6334 cache.go:107] acquiring lock: {Name:mka9f418446ee3a4dc68a5aadeec40ab9ef6d162 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:38:34.400557    6334 cache.go:107] acquiring lock: {Name:mk3da99cd1ea00d6fccd6f5bbe1d9d14f5d81c50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:38:34.400583    6334 start.go:364] duration metric: took 34.542µs to acquireMachinesLock for "no-preload-216000"
	I0813 17:38:34.400600    6334 cache.go:107] acquiring lock: {Name:mkfe95fbb4d8ba591410cedfc0f07831760c32cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:38:34.400594    6334 start.go:93] Provisioning new machine with config: &{Name:no-preload-216000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-216000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:38:34.400620    6334 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:38:34.400681    6334 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0813 17:38:34.400689    6334 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0813 17:38:34.400721    6334 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0813 17:38:34.400724    6334 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0813 17:38:34.400838    6334 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0813 17:38:34.400724    6334 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0813 17:38:34.400786    6334 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0813 17:38:34.405285    6334 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 17:38:34.411441    6334 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0813 17:38:34.411999    6334 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0813 17:38:34.412345    6334 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0813 17:38:34.414048    6334 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0813 17:38:34.415028    6334 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0813 17:38:34.415104    6334 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0813 17:38:34.415145    6334 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0813 17:38:34.421217    6334 start.go:159] libmachine.API.Create for "no-preload-216000" (driver="qemu2")
	I0813 17:38:34.421245    6334 client.go:168] LocalClient.Create starting
	I0813 17:38:34.421308    6334 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:38:34.421346    6334 main.go:141] libmachine: Decoding PEM data...
	I0813 17:38:34.421355    6334 main.go:141] libmachine: Parsing certificate...
	I0813 17:38:34.421391    6334 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:38:34.421413    6334 main.go:141] libmachine: Decoding PEM data...
	I0813 17:38:34.421424    6334 main.go:141] libmachine: Parsing certificate...
	I0813 17:38:34.421789    6334 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:38:34.573989    6334 main.go:141] libmachine: Creating SSH key...
	I0813 17:38:34.712262    6334 main.go:141] libmachine: Creating Disk image...
	I0813 17:38:34.712280    6334 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:38:34.712469    6334 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/no-preload-216000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/no-preload-216000/disk.qcow2
	I0813 17:38:34.721746    6334 main.go:141] libmachine: STDOUT: 
	I0813 17:38:34.721772    6334 main.go:141] libmachine: STDERR: 
	I0813 17:38:34.721816    6334 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/no-preload-216000/disk.qcow2 +20000M
	I0813 17:38:34.729997    6334 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:38:34.730014    6334 main.go:141] libmachine: STDERR: 
	I0813 17:38:34.730025    6334 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/no-preload-216000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/no-preload-216000/disk.qcow2
	I0813 17:38:34.730029    6334 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:38:34.730043    6334 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:38:34.730067    6334 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/no-preload-216000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/no-preload-216000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/no-preload-216000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:58:bd:f3:ee:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/no-preload-216000/disk.qcow2
	I0813 17:38:34.731868    6334 main.go:141] libmachine: STDOUT: 
	I0813 17:38:34.731891    6334 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:38:34.731909    6334 client.go:171] duration metric: took 310.662458ms to LocalClient.Create
	I0813 17:38:34.829696    6334 cache.go:162] opening:  /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0
	I0813 17:38:34.837011    6334 cache.go:162] opening:  /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0813 17:38:34.849049    6334 cache.go:162] opening:  /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0813 17:38:34.862695    6334 cache.go:162] opening:  /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0813 17:38:34.872620    6334 cache.go:162] opening:  /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0813 17:38:34.903642    6334 cache.go:162] opening:  /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0
	I0813 17:38:34.936433    6334 cache.go:162] opening:  /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0
	I0813 17:38:34.989424    6334 cache.go:157] /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0813 17:38:34.989440    6334 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 588.864917ms
	I0813 17:38:34.989453    6334 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0813 17:38:36.732141    6334 start.go:128] duration metric: took 2.331546542s to createHost
	I0813 17:38:36.732172    6334 start.go:83] releasing machines lock for "no-preload-216000", held for 2.331623625s
	W0813 17:38:36.732185    6334 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:38:36.747195    6334 out.go:177] * Deleting "no-preload-216000" in qemu2 ...
	W0813 17:38:36.766120    6334 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:38:36.766129    6334 start.go:729] Will try again in 5 seconds ...
	I0813 17:38:37.359588    6334 cache.go:157] /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0813 17:38:37.359609    6334 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 2.959101125s
	I0813 17:38:37.359621    6334 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0813 17:38:38.230250    6334 cache.go:157] /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0813 17:38:38.230265    6334 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 3.829991208s
	I0813 17:38:38.230275    6334 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0813 17:38:39.630506    6334 cache.go:157] /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0813 17:38:39.630560    6334 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 5.230223792s
	I0813 17:38:39.630588    6334 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0813 17:38:40.316013    6334 cache.go:157] /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0813 17:38:40.316071    6334 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 5.915721083s
	I0813 17:38:40.316098    6334 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0813 17:38:40.612009    6334 cache.go:157] /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0813 17:38:40.612058    6334 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 6.211771666s
	I0813 17:38:40.612086    6334 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0813 17:38:41.766228    6334 start.go:360] acquireMachinesLock for no-preload-216000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:38:41.766591    6334 start.go:364] duration metric: took 294µs to acquireMachinesLock for "no-preload-216000"
	I0813 17:38:41.766700    6334 start.go:93] Provisioning new machine with config: &{Name:no-preload-216000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-216000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:38:41.766983    6334 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:38:41.778657    6334 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 17:38:41.830298    6334 start.go:159] libmachine.API.Create for "no-preload-216000" (driver="qemu2")
	I0813 17:38:41.830342    6334 client.go:168] LocalClient.Create starting
	I0813 17:38:41.830468    6334 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:38:41.830534    6334 main.go:141] libmachine: Decoding PEM data...
	I0813 17:38:41.830566    6334 main.go:141] libmachine: Parsing certificate...
	I0813 17:38:41.830655    6334 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:38:41.830700    6334 main.go:141] libmachine: Decoding PEM data...
	I0813 17:38:41.830711    6334 main.go:141] libmachine: Parsing certificate...
	I0813 17:38:41.831159    6334 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:38:41.980497    6334 main.go:141] libmachine: Creating SSH key...
	I0813 17:38:42.048065    6334 main.go:141] libmachine: Creating Disk image...
	I0813 17:38:42.048071    6334 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:38:42.048265    6334 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/no-preload-216000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/no-preload-216000/disk.qcow2
	I0813 17:38:42.057392    6334 main.go:141] libmachine: STDOUT: 
	I0813 17:38:42.057423    6334 main.go:141] libmachine: STDERR: 
	I0813 17:38:42.057480    6334 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/no-preload-216000/disk.qcow2 +20000M
	I0813 17:38:42.065653    6334 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:38:42.065670    6334 main.go:141] libmachine: STDERR: 
	I0813 17:38:42.065683    6334 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/no-preload-216000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/no-preload-216000/disk.qcow2
	I0813 17:38:42.065688    6334 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:38:42.065700    6334 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:38:42.065736    6334 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/no-preload-216000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/no-preload-216000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/no-preload-216000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:75:88:5b:51:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/no-preload-216000/disk.qcow2
	I0813 17:38:42.067442    6334 main.go:141] libmachine: STDOUT: 
	I0813 17:38:42.067462    6334 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:38:42.067475    6334 client.go:171] duration metric: took 237.130917ms to LocalClient.Create
	I0813 17:38:42.790108    6334 cache.go:157] /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0813 17:38:42.790172    6334 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.389835417s
	I0813 17:38:42.790199    6334 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0813 17:38:42.790245    6334 cache.go:87] Successfully saved all images to host disk.
	I0813 17:38:44.068586    6334 start.go:128] duration metric: took 2.30155725s to createHost
	I0813 17:38:44.068661    6334 start.go:83] releasing machines lock for "no-preload-216000", held for 2.302084083s
	W0813 17:38:44.069046    6334 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-216000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-216000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:38:44.077682    6334 out.go:177] 
	W0813 17:38:44.080751    6334 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:38:44.080779    6334 out.go:239] * 
	* 
	W0813 17:38:44.082934    6334 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:38:44.091631    6334 out.go:177] 

** /stderr **
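Note on the failure mode in the stderr above: minikube launches QEMU through socket_vmnet_client, which first connects to the unix socket at /var/run/socket_vmnet and then hands the connected socket to qemu-system-aarch64 as file descriptor 3 (the `-netdev socket,id=net0,fd=3` argument). Because that initial connect is refused, QEMU itself never runs. Below is a minimal Go sketch of that fd-passing pattern; it is an illustration only, not minikube's or socket_vmnet's actual code, and the truncated qemu arguments are placeholders:

    package main

    import (
        "fmt"
        "net"
        "os"
        "os/exec"
    )

    func main() {
        // This dial is the step that fails throughout this report with
        // "Connection refused": nothing is listening on the socket.
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            fmt.Fprintf(os.Stderr, "Failed to connect to %q: %v\n", "/var/run/socket_vmnet", err)
            os.Exit(1)
        }
        sockFile, err := conn.(*net.UnixConn).File()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // ExtraFiles[0] becomes fd 3 in the child process, which is what
        // QEMU's -netdev socket,fd=3 refers to. Remaining qemu flags omitted.
        cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        cmd.ExtraFiles = []*os.File{sockFile}
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }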
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-216000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-216000 -n no-preload-216000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-216000 -n no-preload-216000: exit status 7 (65.172417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-216000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.89s)
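Every create and restart in this group dies at the same first step: the dial of /var/run/socket_vmnet is refused, meaning no socket_vmnet daemon is listening on the build host, so the ~10s failures here say nothing about Kubernetes itself. A quick pre-flight probe, sketched in Go below (a hypothetical diagnostic, not part of the test suite), reproduces the failure without involving minikube or QEMU at all:

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet"
        // A refused dial here is exactly the condition behind every
        // "Failed to connect" line in this report.
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }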

TestStartStop/group/embed-certs/serial/FirstStart (9.93s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-918000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-918000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.8629565s)

-- stdout --
	* [embed-certs-918000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-918000" primary control-plane node in "embed-certs-918000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-918000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0813 17:38:37.716857    6375 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:38:37.716991    6375 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:38:37.716995    6375 out.go:304] Setting ErrFile to fd 2...
	I0813 17:38:37.716996    6375 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:38:37.717139    6375 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:38:37.718193    6375 out.go:298] Setting JSON to false
	I0813 17:38:37.735177    6375 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4081,"bootTime":1723591836,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:38:37.735240    6375 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:38:37.739106    6375 out.go:177] * [embed-certs-918000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:38:37.745094    6375 notify.go:220] Checking for updates...
	I0813 17:38:37.748941    6375 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:38:37.753864    6375 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:38:37.756948    6375 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:38:37.763887    6375 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:38:37.770985    6375 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:38:37.777955    6375 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:38:37.782304    6375 config.go:182] Loaded profile config "multinode-980000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:38:37.782379    6375 config.go:182] Loaded profile config "no-preload-216000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:38:37.782430    6375 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:38:37.785966    6375 out.go:177] * Using the qemu2 driver based on user configuration
	I0813 17:38:37.792981    6375 start.go:297] selected driver: qemu2
	I0813 17:38:37.792985    6375 start.go:901] validating driver "qemu2" against <nil>
	I0813 17:38:37.792991    6375 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:38:37.795446    6375 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 17:38:37.799960    6375 out.go:177] * Automatically selected the socket_vmnet network
	I0813 17:38:37.803390    6375 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 17:38:37.803411    6375 cni.go:84] Creating CNI manager for ""
	I0813 17:38:37.803419    6375 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0813 17:38:37.803423    6375 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 17:38:37.803455    6375 start.go:340] cluster config:
	{Name:embed-certs-918000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-918000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:38:37.807498    6375 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:38:37.815987    6375 out.go:177] * Starting "embed-certs-918000" primary control-plane node in "embed-certs-918000" cluster
	I0813 17:38:37.817333    6375 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:38:37.817355    6375 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 17:38:37.817362    6375 cache.go:56] Caching tarball of preloaded images
	I0813 17:38:37.817426    6375 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:38:37.817432    6375 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0813 17:38:37.817487    6375 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/embed-certs-918000/config.json ...
	I0813 17:38:37.817499    6375 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/embed-certs-918000/config.json: {Name:mk847b53214fd42b71d50ae82afd805dcaad143c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:38:37.817703    6375 start.go:360] acquireMachinesLock for embed-certs-918000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:38:37.817739    6375 start.go:364] duration metric: took 29.917µs to acquireMachinesLock for "embed-certs-918000"
	I0813 17:38:37.817753    6375 start.go:93] Provisioning new machine with config: &{Name:embed-certs-918000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-918000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:38:37.817782    6375 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:38:37.824935    6375 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 17:38:37.842379    6375 start.go:159] libmachine.API.Create for "embed-certs-918000" (driver="qemu2")
	I0813 17:38:37.842407    6375 client.go:168] LocalClient.Create starting
	I0813 17:38:37.842483    6375 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:38:37.842513    6375 main.go:141] libmachine: Decoding PEM data...
	I0813 17:38:37.842523    6375 main.go:141] libmachine: Parsing certificate...
	I0813 17:38:37.842567    6375 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:38:37.842593    6375 main.go:141] libmachine: Decoding PEM data...
	I0813 17:38:37.842606    6375 main.go:141] libmachine: Parsing certificate...
	I0813 17:38:37.842932    6375 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:38:37.985846    6375 main.go:141] libmachine: Creating SSH key...
	I0813 17:38:38.089955    6375 main.go:141] libmachine: Creating Disk image...
	I0813 17:38:38.089962    6375 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:38:38.090157    6375 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/embed-certs-918000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/embed-certs-918000/disk.qcow2
	I0813 17:38:38.099499    6375 main.go:141] libmachine: STDOUT: 
	I0813 17:38:38.099518    6375 main.go:141] libmachine: STDERR: 
	I0813 17:38:38.099568    6375 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/embed-certs-918000/disk.qcow2 +20000M
	I0813 17:38:38.107787    6375 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:38:38.107803    6375 main.go:141] libmachine: STDERR: 
	I0813 17:38:38.107828    6375 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/embed-certs-918000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/embed-certs-918000/disk.qcow2
	I0813 17:38:38.107834    6375 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:38:38.107848    6375 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:38:38.107874    6375 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/embed-certs-918000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/embed-certs-918000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/embed-certs-918000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:1e:16:d1:6e:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/embed-certs-918000/disk.qcow2
	I0813 17:38:38.109461    6375 main.go:141] libmachine: STDOUT: 
	I0813 17:38:38.109478    6375 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:38:38.109504    6375 client.go:171] duration metric: took 267.093958ms to LocalClient.Create
	I0813 17:38:40.111677    6375 start.go:128] duration metric: took 2.29391125s to createHost
	I0813 17:38:40.111730    6375 start.go:83] releasing machines lock for "embed-certs-918000", held for 2.294019042s
	W0813 17:38:40.111796    6375 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:38:40.125782    6375 out.go:177] * Deleting "embed-certs-918000" in qemu2 ...
	W0813 17:38:40.162016    6375 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:38:40.162046    6375 start.go:729] Will try again in 5 seconds ...
	I0813 17:38:45.164115    6375 start.go:360] acquireMachinesLock for embed-certs-918000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:38:45.164370    6375 start.go:364] duration metric: took 187.916µs to acquireMachinesLock for "embed-certs-918000"
	I0813 17:38:45.164464    6375 start.go:93] Provisioning new machine with config: &{Name:embed-certs-918000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-918000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:38:45.164721    6375 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:38:45.175270    6375 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 17:38:45.222231    6375 start.go:159] libmachine.API.Create for "embed-certs-918000" (driver="qemu2")
	I0813 17:38:45.222282    6375 client.go:168] LocalClient.Create starting
	I0813 17:38:45.222396    6375 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:38:45.222453    6375 main.go:141] libmachine: Decoding PEM data...
	I0813 17:38:45.222472    6375 main.go:141] libmachine: Parsing certificate...
	I0813 17:38:45.222554    6375 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:38:45.222590    6375 main.go:141] libmachine: Decoding PEM data...
	I0813 17:38:45.222613    6375 main.go:141] libmachine: Parsing certificate...
	I0813 17:38:45.223170    6375 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:38:45.406743    6375 main.go:141] libmachine: Creating SSH key...
	I0813 17:38:45.470239    6375 main.go:141] libmachine: Creating Disk image...
	I0813 17:38:45.470245    6375 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:38:45.470418    6375 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/embed-certs-918000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/embed-certs-918000/disk.qcow2
	I0813 17:38:45.480031    6375 main.go:141] libmachine: STDOUT: 
	I0813 17:38:45.480048    6375 main.go:141] libmachine: STDERR: 
	I0813 17:38:45.480095    6375 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/embed-certs-918000/disk.qcow2 +20000M
	I0813 17:38:45.488161    6375 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:38:45.488178    6375 main.go:141] libmachine: STDERR: 
	I0813 17:38:45.488188    6375 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/embed-certs-918000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/embed-certs-918000/disk.qcow2
	I0813 17:38:45.488193    6375 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:38:45.488203    6375 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:38:45.488232    6375 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/embed-certs-918000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/embed-certs-918000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/embed-certs-918000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:f3:a3:5b:85:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/embed-certs-918000/disk.qcow2
	I0813 17:38:45.489863    6375 main.go:141] libmachine: STDOUT: 
	I0813 17:38:45.489878    6375 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:38:45.489891    6375 client.go:171] duration metric: took 267.604833ms to LocalClient.Create
	I0813 17:38:47.492062    6375 start.go:128] duration metric: took 2.3273425s to createHost
	I0813 17:38:47.492119    6375 start.go:83] releasing machines lock for "embed-certs-918000", held for 2.327770666s
	W0813 17:38:47.492380    6375 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-918000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-918000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:38:47.505901    6375 out.go:177] 
	W0813 17:38:47.514073    6375 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:38:47.514109    6375 out.go:239] * 
	* 
	W0813 17:38:47.516484    6375 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:38:47.531002    6375 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-918000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000: exit status 7 (65.537834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.93s)
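The embed-certs run shows the same two-attempt shape as no-preload: StartHost fails, the half-created profile is deleted, minikube waits five seconds ("Will try again in 5 seconds ..."), retries once, and only then exits with GUEST_PROVISION. A schematic of that retry flow in Go follows; it illustrates the behavior visible in the log and is not the actual start.go code:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startHost stands in for the real provisioning call; in this run it
    // always fails the same way.
    func startHost() error {
        return errors.New(`creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := startHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
            if err := startHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
            }
        }
    }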

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-216000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-216000 create -f testdata/busybox.yaml: exit status 1 (32.074791ms)

** stderr ** 
	error: context "no-preload-216000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-216000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-216000 -n no-preload-216000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-216000 -n no-preload-216000: exit status 7 (28.332916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-216000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-216000 -n no-preload-216000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-216000 -n no-preload-216000: exit status 7 (27.566875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-216000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
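This DeployApp failure is purely downstream of FirstStart: no VM was ever provisioned, so minikube never wrote a kubeconfig context named no-preload-216000, and every kubectl --context call exits 1 before reaching a cluster. The sketch below, using k8s.io/client-go (a hypothetical check, not part of the suite), shows how that context lookup resolves against the same kubeconfig chain kubectl uses:

    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the kubeconfig chain kubectl uses (KUBECONFIG or ~/.kube/config).
        cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        const want = "no-preload-216000"
        if _, ok := cfg.Contexts[want]; !ok {
            // This is the state the test hit: start exited before the
            // context was written, so kubectl reports it does not exist.
            fmt.Fprintf(os.Stderr, "error: context %q does not exist\n", want)
            os.Exit(1)
        }
        fmt.Println("context exists:", want)
    }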

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-216000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-216000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-216000 describe deploy/metrics-server -n kube-system: exit status 1 (26.12275ms)

** stderr ** 
	error: context "no-preload-216000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-216000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-216000 -n no-preload-216000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-216000 -n no-preload-216000: exit status 7 (27.604125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-216000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.10s)
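Each post-mortem in this report calls `out/minikube-darwin-arm64 status --format={{.Host}}`, which renders the profile's status through a Go text/template; that is why stdout is the single word "Stopped" while the exit code is 7. A minimal sketch of that rendering is below; the Status struct and its field names are assumptions for illustration, not minikube's exact type:

    package main

    import (
        "os"
        "text/template"
    )

    // Status mirrors the kind of struct a --format template is applied to;
    // the field set here is assumed for this example.
    type Status struct {
        Host, Kubelet, APIServer string
    }

    func main() {
        // Equivalent of: minikube status --format={{.Host}}
        tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
        st := Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"}
        if err := tmpl.Execute(os.Stdout, st); err != nil {
            panic(err)
        }
        os.Stdout.WriteString("\n")
    }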

TestStartStop/group/no-preload/serial/SecondStart (6.11s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-216000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
E0813 17:38:46.955403    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-216000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (6.041993s)

-- stdout --
	* [no-preload-216000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-216000" primary control-plane node in "no-preload-216000" cluster
	* Restarting existing qemu2 VM for "no-preload-216000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-216000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0813 17:38:46.572915    6426 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:38:46.573032    6426 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:38:46.573035    6426 out.go:304] Setting ErrFile to fd 2...
	I0813 17:38:46.573037    6426 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:38:46.573169    6426 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:38:46.574191    6426 out.go:298] Setting JSON to false
	I0813 17:38:46.589934    6426 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4090,"bootTime":1723591836,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:38:46.590012    6426 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:38:46.594259    6426 out.go:177] * [no-preload-216000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:38:46.600265    6426 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:38:46.600350    6426 notify.go:220] Checking for updates...
	I0813 17:38:46.608269    6426 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:38:46.612311    6426 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:38:46.615308    6426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:38:46.618264    6426 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:38:46.621283    6426 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:38:46.627380    6426 config.go:182] Loaded profile config "no-preload-216000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:38:46.627639    6426 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:38:46.631262    6426 out.go:177] * Using the qemu2 driver based on existing profile
	I0813 17:38:46.638266    6426 start.go:297] selected driver: qemu2
	I0813 17:38:46.638273    6426 start.go:901] validating driver "qemu2" against &{Name:no-preload-216000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-216000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:38:46.638323    6426 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:38:46.640660    6426 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 17:38:46.640713    6426 cni.go:84] Creating CNI manager for ""
	I0813 17:38:46.640720    6426 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0813 17:38:46.640746    6426 start.go:340] cluster config:
	{Name:no-preload-216000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-216000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:38:46.644280    6426 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:38:46.652231    6426 out.go:177] * Starting "no-preload-216000" primary control-plane node in "no-preload-216000" cluster
	I0813 17:38:46.656208    6426 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:38:46.656268    6426 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/no-preload-216000/config.json ...
	I0813 17:38:46.656298    6426 cache.go:107] acquiring lock: {Name:mke14a3dc3194db543c276212c81745047c71d9e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:38:46.656321    6426 cache.go:107] acquiring lock: {Name:mka5c282cb2fd549abad1dd055e7de80a0d0f42f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:38:46.656373    6426 cache.go:115] /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0813 17:38:46.656371    6426 cache.go:107] acquiring lock: {Name:mkfe95fbb4d8ba591410cedfc0f07831760c32cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:38:46.656378    6426 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 83.458µs
	I0813 17:38:46.656384    6426 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0813 17:38:46.656392    6426 cache.go:107] acquiring lock: {Name:mka9f418446ee3a4dc68a5aadeec40ab9ef6d162 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:38:46.656393    6426 cache.go:115] /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0813 17:38:46.656412    6426 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 110.542µs
	I0813 17:38:46.656416    6426 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0813 17:38:46.656425    6426 cache.go:107] acquiring lock: {Name:mk3da99cd1ea00d6fccd6f5bbe1d9d14f5d81c50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:38:46.656434    6426 cache.go:115] /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0813 17:38:46.656437    6426 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 46.416µs
	I0813 17:38:46.656442    6426 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0813 17:38:46.656447    6426 cache.go:107] acquiring lock: {Name:mk9095444d79c2a6f00b4b011d7f024cb4fe180f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:38:46.656446    6426 cache.go:107] acquiring lock: {Name:mk8aaee9748bf2bed30221ac00fdfb8c50ae80bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:38:46.656480    6426 cache.go:115] /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0813 17:38:46.656486    6426 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 143.584µs
	I0813 17:38:46.656497    6426 cache.go:115] /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0813 17:38:46.656499    6426 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0813 17:38:46.656471    6426 cache.go:115] /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0813 17:38:46.656508    6426 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 83.167µs
	I0813 17:38:46.656511    6426 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0813 17:38:46.656502    6426 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 55.25µs
	I0813 17:38:46.656514    6426 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0813 17:38:46.656513    6426 cache.go:107] acquiring lock: {Name:mk70a1bf4c201720c543f0b61415fa6826588f63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:38:46.656557    6426 cache.go:115] /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0813 17:38:46.656554    6426 cache.go:115] /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0813 17:38:46.656561    6426 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 69.042µs
	I0813 17:38:46.656566    6426 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0813 17:38:46.656563    6426 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 265.084µs
	I0813 17:38:46.656573    6426 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0813 17:38:46.656575    6426 cache.go:87] Successfully saved all images to host disk.
	I0813 17:38:46.656663    6426 start.go:360] acquireMachinesLock for no-preload-216000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:38:47.492259    6426 start.go:364] duration metric: took 835.551958ms to acquireMachinesLock for "no-preload-216000"
	I0813 17:38:47.492392    6426 start.go:96] Skipping create...Using existing machine configuration
	I0813 17:38:47.492420    6426 fix.go:54] fixHost starting: 
	I0813 17:38:47.493048    6426 fix.go:112] recreateIfNeeded on no-preload-216000: state=Stopped err=<nil>
	W0813 17:38:47.493103    6426 fix.go:138] unexpected machine state, will restart: <nil>
	I0813 17:38:47.509942    6426 out.go:177] * Restarting existing qemu2 VM for "no-preload-216000" ...
	I0813 17:38:47.518024    6426 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:38:47.518210    6426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/no-preload-216000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/no-preload-216000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/no-preload-216000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:75:88:5b:51:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/no-preload-216000/disk.qcow2
	I0813 17:38:47.527730    6426 main.go:141] libmachine: STDOUT: 
	I0813 17:38:47.527814    6426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:38:47.527974    6426 fix.go:56] duration metric: took 35.531875ms for fixHost
	I0813 17:38:47.527990    6426 start.go:83] releasing machines lock for "no-preload-216000", held for 35.681416ms
	W0813 17:38:47.528024    6426 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:38:47.528189    6426 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:38:47.528208    6426 start.go:729] Will try again in 5 seconds ...
	I0813 17:38:52.530403    6426 start.go:360] acquireMachinesLock for no-preload-216000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:38:52.530826    6426 start.go:364] duration metric: took 297.75µs to acquireMachinesLock for "no-preload-216000"
	I0813 17:38:52.530932    6426 start.go:96] Skipping create...Using existing machine configuration
	I0813 17:38:52.530951    6426 fix.go:54] fixHost starting: 
	I0813 17:38:52.531699    6426 fix.go:112] recreateIfNeeded on no-preload-216000: state=Stopped err=<nil>
	W0813 17:38:52.531725    6426 fix.go:138] unexpected machine state, will restart: <nil>
	I0813 17:38:52.536284    6426 out.go:177] * Restarting existing qemu2 VM for "no-preload-216000" ...
	I0813 17:38:52.544294    6426 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:38:52.544682    6426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/no-preload-216000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/no-preload-216000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/no-preload-216000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:75:88:5b:51:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/no-preload-216000/disk.qcow2
	I0813 17:38:52.554244    6426 main.go:141] libmachine: STDOUT: 
	I0813 17:38:52.554347    6426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:38:52.554434    6426 fix.go:56] duration metric: took 23.483666ms for fixHost
	I0813 17:38:52.554453    6426 start.go:83] releasing machines lock for "no-preload-216000", held for 23.600625ms
	W0813 17:38:52.554675    6426 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-216000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-216000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:38:52.562207    6426 out.go:177] 
	W0813 17:38:52.565379    6426 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:38:52.565440    6426 out.go:239] * 
	* 
	W0813 17:38:52.567841    6426 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:38:52.576247    6426 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-216000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-216000 -n no-preload-216000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-216000 -n no-preload-216000: exit status 7 (63.922334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-216000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (6.11s)
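Note: every qemu2 start failure in this report follows the same pattern: libmachine launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which fails immediately with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', meaning no socket_vmnet daemon was listening on that unix socket on the CI agent. A minimal standalone sketch of that first failing step (not part of the test harness; the socket path is taken from the command lines above):

package main

// Dials the unix socket that socket_vmnet_client connects to before it
// would exec qemu-system-aarch64. A "connection refused" error here
// corresponds to the STDERR seen throughout this report.
import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the failing qemu2 command lines
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this check fails, the fix is on the agent rather than in minikube: the socket_vmnet service needs to be restarted so the socket is listening again before the remaining qemu2 tests can pass.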

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-918000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-918000 create -f testdata/busybox.yaml: exit status 1 (30.057125ms)

** stderr ** 
	error: context "embed-certs-918000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-918000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000: exit status 7 (27.663666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-918000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000: exit status 7 (27.688084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-918000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-918000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-918000 describe deploy/metrics-server -n kube-system: exit status 1 (26.227042ms)

** stderr ** 
	error: context "embed-certs-918000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-918000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000: exit status 7 (27.663708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.89s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-918000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-918000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.831883167s)

-- stdout --
	* [embed-certs-918000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-918000" primary control-plane node in "embed-certs-918000" cluster
	* Restarting existing qemu2 VM for "embed-certs-918000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-918000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0813 17:38:49.897822    6463 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:38:49.897945    6463 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:38:49.897948    6463 out.go:304] Setting ErrFile to fd 2...
	I0813 17:38:49.897950    6463 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:38:49.898071    6463 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:38:49.899031    6463 out.go:298] Setting JSON to false
	I0813 17:38:49.914884    6463 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4093,"bootTime":1723591836,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:38:49.914980    6463 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:38:49.918511    6463 out.go:177] * [embed-certs-918000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:38:49.925470    6463 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:38:49.925524    6463 notify.go:220] Checking for updates...
	I0813 17:38:49.930843    6463 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:38:49.933470    6463 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:38:49.936455    6463 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:38:49.939459    6463 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:38:49.946459    6463 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:38:49.950777    6463 config.go:182] Loaded profile config "embed-certs-918000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:38:49.951040    6463 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:38:49.955415    6463 out.go:177] * Using the qemu2 driver based on existing profile
	I0813 17:38:49.961449    6463 start.go:297] selected driver: qemu2
	I0813 17:38:49.961457    6463 start.go:901] validating driver "qemu2" against &{Name:embed-certs-918000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-918000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:38:49.961514    6463 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:38:49.963757    6463 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 17:38:49.963786    6463 cni.go:84] Creating CNI manager for ""
	I0813 17:38:49.963795    6463 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0813 17:38:49.963824    6463 start.go:340] cluster config:
	{Name:embed-certs-918000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-918000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:38:49.967203    6463 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:38:49.975449    6463 out.go:177] * Starting "embed-certs-918000" primary control-plane node in "embed-certs-918000" cluster
	I0813 17:38:49.979442    6463 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:38:49.979464    6463 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 17:38:49.979476    6463 cache.go:56] Caching tarball of preloaded images
	I0813 17:38:49.979532    6463 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:38:49.979537    6463 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0813 17:38:49.979586    6463 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/embed-certs-918000/config.json ...
	I0813 17:38:49.980027    6463 start.go:360] acquireMachinesLock for embed-certs-918000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:38:49.980059    6463 start.go:364] duration metric: took 25.833µs to acquireMachinesLock for "embed-certs-918000"
	I0813 17:38:49.980068    6463 start.go:96] Skipping create...Using existing machine configuration
	I0813 17:38:49.980076    6463 fix.go:54] fixHost starting: 
	I0813 17:38:49.980198    6463 fix.go:112] recreateIfNeeded on embed-certs-918000: state=Stopped err=<nil>
	W0813 17:38:49.980206    6463 fix.go:138] unexpected machine state, will restart: <nil>
	I0813 17:38:49.983502    6463 out.go:177] * Restarting existing qemu2 VM for "embed-certs-918000" ...
	I0813 17:38:49.987446    6463 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:38:49.987483    6463 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/embed-certs-918000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/embed-certs-918000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/embed-certs-918000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:f3:a3:5b:85:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/embed-certs-918000/disk.qcow2
	I0813 17:38:49.989407    6463 main.go:141] libmachine: STDOUT: 
	I0813 17:38:49.989425    6463 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:38:49.989455    6463 fix.go:56] duration metric: took 9.379458ms for fixHost
	I0813 17:38:49.989461    6463 start.go:83] releasing machines lock for "embed-certs-918000", held for 9.397334ms
	W0813 17:38:49.989467    6463 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:38:49.989514    6463 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:38:49.989519    6463 start.go:729] Will try again in 5 seconds ...
	I0813 17:38:54.991808    6463 start.go:360] acquireMachinesLock for embed-certs-918000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:38:55.622530    6463 start.go:364] duration metric: took 630.620333ms to acquireMachinesLock for "embed-certs-918000"
	I0813 17:38:55.622635    6463 start.go:96] Skipping create...Using existing machine configuration
	I0813 17:38:55.622658    6463 fix.go:54] fixHost starting: 
	I0813 17:38:55.623385    6463 fix.go:112] recreateIfNeeded on embed-certs-918000: state=Stopped err=<nil>
	W0813 17:38:55.623416    6463 fix.go:138] unexpected machine state, will restart: <nil>
	I0813 17:38:55.632784    6463 out.go:177] * Restarting existing qemu2 VM for "embed-certs-918000" ...
	I0813 17:38:55.652736    6463 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:38:55.652971    6463 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/embed-certs-918000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/embed-certs-918000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/embed-certs-918000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:f3:a3:5b:85:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/embed-certs-918000/disk.qcow2
	I0813 17:38:55.662805    6463 main.go:141] libmachine: STDOUT: 
	I0813 17:38:55.662879    6463 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:38:55.662968    6463 fix.go:56] duration metric: took 40.310209ms for fixHost
	I0813 17:38:55.662990    6463 start.go:83] releasing machines lock for "embed-certs-918000", held for 40.429ms
	W0813 17:38:55.663198    6463 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-918000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-918000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:38:55.671756    6463 out.go:177] 
	W0813 17:38:55.674928    6463 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:38:55.674973    6463 out.go:239] * 
	* 
	W0813 17:38:55.677186    6463 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:38:55.685776    6463 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-918000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000: exit status 7 (60.584291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.89s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-216000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-216000 -n no-preload-216000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-216000 -n no-preload-216000: exit status 7 (31.972125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-216000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
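Note: this test fails in 0.03s because it never reaches a cluster: the preceding SecondStart failure means the kubeconfig context "no-preload-216000" was never recreated, so client configuration fails immediately. A minimal sketch of that context lookup, assuming k8s.io/client-go (the harness may resolve the context differently):

package main

// Loads the default kubeconfig and checks whether the test profile's
// context exists; a missing entry reproduces the
// 'context "no-preload-216000" does not exist' errors above.
import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	rules := clientcmd.NewDefaultClientConfigLoadingRules() // honors $KUBECONFIG
	cfg, err := rules.Load()
	if err != nil {
		panic(err)
	}
	_, ok := cfg.Contexts["no-preload-216000"]
	fmt.Println("context exists:", ok)
}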

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-216000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-216000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-216000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.782875ms)

** stderr ** 
	error: context "no-preload-216000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-216000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-216000 -n no-preload-216000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-216000 -n no-preload-216000: exit status 7 (28.559084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-216000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-216000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-216000 -n no-preload-216000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-216000 -n no-preload-216000: exit status 7 (28.540541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-216000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
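Note: the want/got block above is a textual diff in go-cmp style: each "-" line is an expected v1.31.0 image that is absent from the "minikube image list" output, which is empty here because the VM never started. A minimal sketch of how such a diff can be produced, assuming github.com/google/go-cmp (the harness may build it differently):

package main

// cmp.Diff prints entries present only in want with a leading "-" and
// entries present only in got with a leading "+", matching the format
// of the report above.
import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/kube-apiserver:v1.31.0",
		// (remaining expected images elided)
	}
	got := []string{} // empty: the host is Stopped, so no images are listed

	if d := cmp.Diff(want, got); d != "" {
		fmt.Printf("v1.31.0 images missing (-want +got):\n%s", d)
	}
}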

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-216000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-216000 --alsologtostderr -v=1: exit status 83 (37.990125ms)

-- stdout --
	* The control-plane node no-preload-216000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-216000"

-- /stdout --
** stderr ** 
	I0813 17:38:52.835165    6486 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:38:52.835328    6486 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:38:52.835332    6486 out.go:304] Setting ErrFile to fd 2...
	I0813 17:38:52.835335    6486 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:38:52.835469    6486 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:38:52.835678    6486 out.go:298] Setting JSON to false
	I0813 17:38:52.835688    6486 mustload.go:65] Loading cluster: no-preload-216000
	I0813 17:38:52.835872    6486 config.go:182] Loaded profile config "no-preload-216000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:38:52.838698    6486 out.go:177] * The control-plane node no-preload-216000 host is not running: state=Stopped
	I0813 17:38:52.841677    6486 out.go:177]   To start a cluster, run: "minikube start -p no-preload-216000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-216000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-216000 -n no-preload-216000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-216000 -n no-preload-216000: exit status 7 (28.542042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-216000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-216000 -n no-preload-216000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-216000 -n no-preload-216000: exit status 7 (28.543333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-216000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-607000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-607000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.904477959s)

-- stdout --
	* [default-k8s-diff-port-607000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-607000" primary control-plane node in "default-k8s-diff-port-607000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-607000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0813 17:38:53.244618    6510 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:38:53.244913    6510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:38:53.244917    6510 out.go:304] Setting ErrFile to fd 2...
	I0813 17:38:53.244920    6510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:38:53.245197    6510 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:38:53.246433    6510 out.go:298] Setting JSON to false
	I0813 17:38:53.262429    6510 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4097,"bootTime":1723591836,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:38:53.262493    6510 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:38:53.267718    6510 out.go:177] * [default-k8s-diff-port-607000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:38:53.274652    6510 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:38:53.274687    6510 notify.go:220] Checking for updates...
	I0813 17:38:53.282503    6510 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:38:53.286617    6510 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:38:53.289690    6510 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:38:53.291031    6510 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:38:53.293630    6510 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:38:53.296980    6510 config.go:182] Loaded profile config "embed-certs-918000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:38:53.297040    6510 config.go:182] Loaded profile config "multinode-980000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:38:53.297100    6510 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:38:53.298600    6510 out.go:177] * Using the qemu2 driver based on user configuration
	I0813 17:38:53.305666    6510 start.go:297] selected driver: qemu2
	I0813 17:38:53.305673    6510 start.go:901] validating driver "qemu2" against <nil>
	I0813 17:38:53.305679    6510 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:38:53.307943    6510 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 17:38:53.310663    6510 out.go:177] * Automatically selected the socket_vmnet network
	I0813 17:38:53.314709    6510 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 17:38:53.314762    6510 cni.go:84] Creating CNI manager for ""
	I0813 17:38:53.314770    6510 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0813 17:38:53.314775    6510 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 17:38:53.314802    6510 start.go:340] cluster config:
	{Name:default-k8s-diff-port-607000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-607000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:38:53.318388    6510 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:38:53.326627    6510 out.go:177] * Starting "default-k8s-diff-port-607000" primary control-plane node in "default-k8s-diff-port-607000" cluster
	I0813 17:38:53.330668    6510 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:38:53.330699    6510 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 17:38:53.330706    6510 cache.go:56] Caching tarball of preloaded images
	I0813 17:38:53.330768    6510 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:38:53.330773    6510 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0813 17:38:53.330832    6510 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/default-k8s-diff-port-607000/config.json ...
	I0813 17:38:53.330843    6510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/default-k8s-diff-port-607000/config.json: {Name:mk165b5677d34a5e47c038aebaaaa3dd0ae28b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:38:53.331216    6510 start.go:360] acquireMachinesLock for default-k8s-diff-port-607000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:38:53.331251    6510 start.go:364] duration metric: took 27.542µs to acquireMachinesLock for "default-k8s-diff-port-607000"
	I0813 17:38:53.331263    6510 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-607000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-607000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:38:53.331295    6510 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:38:53.339623    6510 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 17:38:53.356513    6510 start.go:159] libmachine.API.Create for "default-k8s-diff-port-607000" (driver="qemu2")
	I0813 17:38:53.356540    6510 client.go:168] LocalClient.Create starting
	I0813 17:38:53.356610    6510 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:38:53.356642    6510 main.go:141] libmachine: Decoding PEM data...
	I0813 17:38:53.356653    6510 main.go:141] libmachine: Parsing certificate...
	I0813 17:38:53.356693    6510 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:38:53.356716    6510 main.go:141] libmachine: Decoding PEM data...
	I0813 17:38:53.356724    6510 main.go:141] libmachine: Parsing certificate...
	I0813 17:38:53.357115    6510 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:38:53.498448    6510 main.go:141] libmachine: Creating SSH key...
	I0813 17:38:53.600957    6510 main.go:141] libmachine: Creating Disk image...
	I0813 17:38:53.600968    6510 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:38:53.601153    6510 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/default-k8s-diff-port-607000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/default-k8s-diff-port-607000/disk.qcow2
	I0813 17:38:53.610319    6510 main.go:141] libmachine: STDOUT: 
	I0813 17:38:53.610334    6510 main.go:141] libmachine: STDERR: 
	I0813 17:38:53.610371    6510 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/default-k8s-diff-port-607000/disk.qcow2 +20000M
	I0813 17:38:53.618310    6510 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:38:53.618331    6510 main.go:141] libmachine: STDERR: 
	I0813 17:38:53.618344    6510 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/default-k8s-diff-port-607000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/default-k8s-diff-port-607000/disk.qcow2
	I0813 17:38:53.618350    6510 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:38:53.618360    6510 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:38:53.618389    6510 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/default-k8s-diff-port-607000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/default-k8s-diff-port-607000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/default-k8s-diff-port-607000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:1c:e3:f7:29:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/default-k8s-diff-port-607000/disk.qcow2
	I0813 17:38:53.619973    6510 main.go:141] libmachine: STDOUT: 
	I0813 17:38:53.619989    6510 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:38:53.620011    6510 client.go:171] duration metric: took 263.468792ms to LocalClient.Create
	I0813 17:38:55.622289    6510 start.go:128] duration metric: took 2.290999666s to createHost
	I0813 17:38:55.622372    6510 start.go:83] releasing machines lock for "default-k8s-diff-port-607000", held for 2.291150167s
	W0813 17:38:55.622426    6510 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:38:55.649826    6510 out.go:177] * Deleting "default-k8s-diff-port-607000" in qemu2 ...
	W0813 17:38:55.707793    6510 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:38:55.707839    6510 start.go:729] Will try again in 5 seconds ...
	I0813 17:39:00.710019    6510 start.go:360] acquireMachinesLock for default-k8s-diff-port-607000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:39:00.710433    6510 start.go:364] duration metric: took 326.792µs to acquireMachinesLock for "default-k8s-diff-port-607000"
	I0813 17:39:00.710557    6510 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-607000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-607000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:39:00.710891    6510 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:39:00.723497    6510 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 17:39:00.775901    6510 start.go:159] libmachine.API.Create for "default-k8s-diff-port-607000" (driver="qemu2")
	I0813 17:39:00.775955    6510 client.go:168] LocalClient.Create starting
	I0813 17:39:00.776105    6510 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:39:00.776168    6510 main.go:141] libmachine: Decoding PEM data...
	I0813 17:39:00.776186    6510 main.go:141] libmachine: Parsing certificate...
	I0813 17:39:00.776258    6510 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:39:00.776305    6510 main.go:141] libmachine: Decoding PEM data...
	I0813 17:39:00.776317    6510 main.go:141] libmachine: Parsing certificate...
	I0813 17:39:00.776896    6510 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:39:00.938713    6510 main.go:141] libmachine: Creating SSH key...
	I0813 17:39:01.056019    6510 main.go:141] libmachine: Creating Disk image...
	I0813 17:39:01.056024    6510 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:39:01.056213    6510 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/default-k8s-diff-port-607000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/default-k8s-diff-port-607000/disk.qcow2
	I0813 17:39:01.065536    6510 main.go:141] libmachine: STDOUT: 
	I0813 17:39:01.065554    6510 main.go:141] libmachine: STDERR: 
	I0813 17:39:01.065604    6510 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/default-k8s-diff-port-607000/disk.qcow2 +20000M
	I0813 17:39:01.073494    6510 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:39:01.073510    6510 main.go:141] libmachine: STDERR: 
	I0813 17:39:01.073519    6510 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/default-k8s-diff-port-607000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/default-k8s-diff-port-607000/disk.qcow2
	I0813 17:39:01.073524    6510 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:39:01.073534    6510 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:39:01.073568    6510 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/default-k8s-diff-port-607000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/default-k8s-diff-port-607000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/default-k8s-diff-port-607000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:70:37:72:38:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/default-k8s-diff-port-607000/disk.qcow2
	I0813 17:39:01.075114    6510 main.go:141] libmachine: STDOUT: 
	I0813 17:39:01.075127    6510 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:39:01.075141    6510 client.go:171] duration metric: took 299.181333ms to LocalClient.Create
	I0813 17:39:03.077336    6510 start.go:128] duration metric: took 2.36642025s to createHost
	I0813 17:39:03.077490    6510 start.go:83] releasing machines lock for "default-k8s-diff-port-607000", held for 2.367007s
	W0813 17:39:03.077962    6510 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-607000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-607000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:39:03.092619    6510 out.go:177] 
	W0813 17:39:03.097660    6510 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:39:03.097698    6510 out.go:239] * 
	* 
	W0813 17:39:03.100294    6510 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:39:03.108517    6510 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-607000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-607000 -n default-k8s-diff-port-607000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-607000 -n default-k8s-diff-port-607000: exit status 7 (64.384458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-607000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.97s)
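
Every failure in this run traces to the same root cause: socket_vmnet_client cannot reach the daemon's unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched. A minimal triage sketch for the test host, assuming the standalone /opt/socket_vmnet install that the log's paths suggest (the gateway address below is illustrative, not taken from this run):

    # Is the socket_vmnet daemon up, and does its socket exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet
    # If not, start it by hand (requires root; the gateway is an example value):
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

Once the socket accepts connections, re-running the failed "minikube start" command above should get past host creation.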

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-918000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000: exit status 7 (31.544208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
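
The "context ... does not exist" failures in this group follow directly from the failed first start: the embed-certs-918000 profile never reached kubeconfig, so every kubectl call against it fails before touching a cluster. Plain kubectl (nothing minikube-specific) confirms which contexts actually exist:

    kubectl config get-contexts     # list all contexts in the active kubeconfig
    kubectl config current-context  # show the selected context, if any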

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-918000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-918000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-918000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.619583ms)

** stderr ** 
	error: context "embed-certs-918000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-918000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000: exit status 7 (27.381458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-918000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000: exit status 7 (28.993417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
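
The one-sided diff above ("-want" entries only) just means image list returned nothing for a VM that never booted. The check can be reproduced by hand with the same command the test runs (profile name as in this run):

    out/minikube-darwin-arm64 -p embed-certs-918000 image list --format=json
    # A working v1.31.0 cluster would list registry.k8s.io/kube-apiserver:v1.31.0,
    # registry.k8s.io/etcd:3.5.15-0, gcr.io/k8s-minikube/storage-provisioner:v5, etc.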

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-918000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-918000 --alsologtostderr -v=1: exit status 83 (43.605958ms)

-- stdout --
	* The control-plane node embed-certs-918000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-918000"

-- /stdout --
** stderr ** 
	I0813 17:38:55.945357    6535 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:38:55.945497    6535 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:38:55.945500    6535 out.go:304] Setting ErrFile to fd 2...
	I0813 17:38:55.945503    6535 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:38:55.945622    6535 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:38:55.945816    6535 out.go:298] Setting JSON to false
	I0813 17:38:55.945827    6535 mustload.go:65] Loading cluster: embed-certs-918000
	I0813 17:38:55.945999    6535 config.go:182] Loaded profile config "embed-certs-918000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:38:55.949274    6535 out.go:177] * The control-plane node embed-certs-918000 host is not running: state=Stopped
	I0813 17:38:55.956360    6535 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-918000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-918000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000: exit status 7 (28.359709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-918000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000: exit status 7 (27.771ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/FirstStart (9.93s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-622000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-622000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.861251541s)

-- stdout --
	* [newest-cni-622000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-622000" primary control-plane node in "newest-cni-622000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-622000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0813 17:38:56.262406    6552 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:38:56.262531    6552 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:38:56.262535    6552 out.go:304] Setting ErrFile to fd 2...
	I0813 17:38:56.262537    6552 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:38:56.262658    6552 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:38:56.263762    6552 out.go:298] Setting JSON to false
	I0813 17:38:56.279776    6552 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4100,"bootTime":1723591836,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:38:56.279857    6552 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:38:56.285351    6552 out.go:177] * [newest-cni-622000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:38:56.292199    6552 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:38:56.292239    6552 notify.go:220] Checking for updates...
	I0813 17:38:56.298155    6552 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:38:56.301229    6552 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:38:56.304270    6552 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:38:56.307265    6552 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:38:56.310235    6552 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:38:56.313554    6552 config.go:182] Loaded profile config "default-k8s-diff-port-607000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:38:56.313613    6552 config.go:182] Loaded profile config "multinode-980000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:38:56.313664    6552 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:38:56.318140    6552 out.go:177] * Using the qemu2 driver based on user configuration
	I0813 17:38:56.325245    6552 start.go:297] selected driver: qemu2
	I0813 17:38:56.325252    6552 start.go:901] validating driver "qemu2" against <nil>
	I0813 17:38:56.325258    6552 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:38:56.327541    6552 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0813 17:38:56.327571    6552 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0813 17:38:56.335186    6552 out.go:177] * Automatically selected the socket_vmnet network
	I0813 17:38:56.338329    6552 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0813 17:38:56.338347    6552 cni.go:84] Creating CNI manager for ""
	I0813 17:38:56.338354    6552 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0813 17:38:56.338358    6552 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 17:38:56.338391    6552 start.go:340] cluster config:
	{Name:newest-cni-622000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:38:56.342156    6552 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:38:56.350257    6552 out.go:177] * Starting "newest-cni-622000" primary control-plane node in "newest-cni-622000" cluster
	I0813 17:38:56.354123    6552 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:38:56.354136    6552 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 17:38:56.354143    6552 cache.go:56] Caching tarball of preloaded images
	I0813 17:38:56.354197    6552 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:38:56.354203    6552 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0813 17:38:56.354254    6552 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/newest-cni-622000/config.json ...
	I0813 17:38:56.354265    6552 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/newest-cni-622000/config.json: {Name:mka2a12b3d273eb7d90a9770179308b6ebe712b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 17:38:56.354481    6552 start.go:360] acquireMachinesLock for newest-cni-622000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:38:56.354517    6552 start.go:364] duration metric: took 29.75µs to acquireMachinesLock for "newest-cni-622000"
	I0813 17:38:56.354530    6552 start.go:93] Provisioning new machine with config: &{Name:newest-cni-622000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0 ClusterName:newest-cni-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:38:56.354564    6552 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:38:56.363055    6552 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 17:38:56.381571    6552 start.go:159] libmachine.API.Create for "newest-cni-622000" (driver="qemu2")
	I0813 17:38:56.381601    6552 client.go:168] LocalClient.Create starting
	I0813 17:38:56.381669    6552 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:38:56.381700    6552 main.go:141] libmachine: Decoding PEM data...
	I0813 17:38:56.381710    6552 main.go:141] libmachine: Parsing certificate...
	I0813 17:38:56.381748    6552 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:38:56.381773    6552 main.go:141] libmachine: Decoding PEM data...
	I0813 17:38:56.381781    6552 main.go:141] libmachine: Parsing certificate...
	I0813 17:38:56.382142    6552 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:38:56.521634    6552 main.go:141] libmachine: Creating SSH key...
	I0813 17:38:56.655928    6552 main.go:141] libmachine: Creating Disk image...
	I0813 17:38:56.655933    6552 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:38:56.656140    6552 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/newest-cni-622000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/newest-cni-622000/disk.qcow2
	I0813 17:38:56.665498    6552 main.go:141] libmachine: STDOUT: 
	I0813 17:38:56.665521    6552 main.go:141] libmachine: STDERR: 
	I0813 17:38:56.665563    6552 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/newest-cni-622000/disk.qcow2 +20000M
	I0813 17:38:56.673490    6552 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:38:56.673506    6552 main.go:141] libmachine: STDERR: 
	I0813 17:38:56.673518    6552 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/newest-cni-622000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/newest-cni-622000/disk.qcow2
	I0813 17:38:56.673522    6552 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:38:56.673534    6552 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:38:56.673569    6552 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/newest-cni-622000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/newest-cni-622000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/newest-cni-622000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:13:13:95:2a:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/newest-cni-622000/disk.qcow2
	I0813 17:38:56.675144    6552 main.go:141] libmachine: STDOUT: 
	I0813 17:38:56.675163    6552 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:38:56.675182    6552 client.go:171] duration metric: took 293.578625ms to LocalClient.Create
	I0813 17:38:58.677394    6552 start.go:128] duration metric: took 2.322832375s to createHost
	I0813 17:38:58.677502    6552 start.go:83] releasing machines lock for "newest-cni-622000", held for 2.323009917s
	W0813 17:38:58.677637    6552 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:38:58.694889    6552 out.go:177] * Deleting "newest-cni-622000" in qemu2 ...
	W0813 17:38:58.725769    6552 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:38:58.725806    6552 start.go:729] Will try again in 5 seconds ...
	I0813 17:39:03.726340    6552 start.go:360] acquireMachinesLock for newest-cni-622000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:39:03.726644    6552 start.go:364] duration metric: took 231.25µs to acquireMachinesLock for "newest-cni-622000"
	I0813 17:39:03.726746    6552 start.go:93] Provisioning new machine with config: &{Name:newest-cni-622000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0 ClusterName:newest-cni-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0813 17:39:03.726953    6552 start.go:125] createHost starting for "" (driver="qemu2")
	I0813 17:39:03.735367    6552 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 17:39:03.779724    6552 start.go:159] libmachine.API.Create for "newest-cni-622000" (driver="qemu2")
	I0813 17:39:03.779786    6552 client.go:168] LocalClient.Create starting
	I0813 17:39:03.779902    6552 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/ca.pem
	I0813 17:39:03.779963    6552 main.go:141] libmachine: Decoding PEM data...
	I0813 17:39:03.779983    6552 main.go:141] libmachine: Parsing certificate...
	I0813 17:39:03.780052    6552 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19429-1127/.minikube/certs/cert.pem
	I0813 17:39:03.780089    6552 main.go:141] libmachine: Decoding PEM data...
	I0813 17:39:03.780104    6552 main.go:141] libmachine: Parsing certificate...
	I0813 17:39:03.780658    6552 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0813 17:39:03.946812    6552 main.go:141] libmachine: Creating SSH key...
	I0813 17:39:04.025318    6552 main.go:141] libmachine: Creating Disk image...
	I0813 17:39:04.025325    6552 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0813 17:39:04.025516    6552 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/newest-cni-622000/disk.qcow2.raw /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/newest-cni-622000/disk.qcow2
	I0813 17:39:04.034530    6552 main.go:141] libmachine: STDOUT: 
	I0813 17:39:04.034551    6552 main.go:141] libmachine: STDERR: 
	I0813 17:39:04.034592    6552 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/newest-cni-622000/disk.qcow2 +20000M
	I0813 17:39:04.042433    6552 main.go:141] libmachine: STDOUT: Image resized.
	
	I0813 17:39:04.042452    6552 main.go:141] libmachine: STDERR: 
	I0813 17:39:04.042462    6552 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/newest-cni-622000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/newest-cni-622000/disk.qcow2
	I0813 17:39:04.042468    6552 main.go:141] libmachine: Starting QEMU VM...
	I0813 17:39:04.042487    6552 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:39:04.042523    6552 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/newest-cni-622000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/newest-cni-622000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/newest-cni-622000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:36:2a:82:59:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/newest-cni-622000/disk.qcow2
	I0813 17:39:04.044083    6552 main.go:141] libmachine: STDOUT: 
	I0813 17:39:04.044102    6552 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:39:04.044115    6552 client.go:171] duration metric: took 264.327959ms to LocalClient.Create
	I0813 17:39:06.046283    6552 start.go:128] duration metric: took 2.319327625s to createHost
	I0813 17:39:06.046357    6552 start.go:83] releasing machines lock for "newest-cni-622000", held for 2.319732917s
	W0813 17:39:06.046677    6552 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-622000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-622000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:39:06.063381    6552 out.go:177] 
	W0813 17:39:06.067522    6552 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:39:06.067558    6552 out.go:239] * 
	* 
	W0813 17:39:06.072574    6552 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:39:06.085532    6552 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-622000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-622000 -n newest-cni-622000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-622000 -n newest-cni-622000: exit status 7 (62.65925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-622000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.93s)
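
The "Connection refused" comes from the wrapper, not from QEMU: socket_vmnet_client must connect to the daemon's socket before it hands the connected file descriptor (fd=3 in the -netdev flag above) to qemu-system-aarch64. A sketch that reproduces the failure in isolation, with `true` standing in for the QEMU command line:

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
    # With no daemon listening, this fails immediately:
    #   Failed to connect to "/var/run/socket_vmnet": Connection refused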

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-607000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-607000 create -f testdata/busybox.yaml: exit status 1 (29.673375ms)

** stderr ** 
	error: context "default-k8s-diff-port-607000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-607000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-607000 -n default-k8s-diff-port-607000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-607000 -n default-k8s-diff-port-607000: exit status 7 (28.470375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-607000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-607000 -n default-k8s-diff-port-607000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-607000 -n default-k8s-diff-port-607000: exit status 7 (28.522084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-607000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-607000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-607000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-607000 describe deploy/metrics-server -n kube-system: exit status 1 (26.634917ms)

** stderr ** 
	error: context "default-k8s-diff-port-607000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-607000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-607000 -n default-k8s-diff-port-607000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-607000 -n default-k8s-diff-port-607000: exit status 7 (28.087917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-607000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-607000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-607000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.184636208s)

-- stdout --
	* [default-k8s-diff-port-607000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-607000" primary control-plane node in "default-k8s-diff-port-607000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-607000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-607000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0813 17:39:07.212535    6624 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:39:07.212655    6624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:39:07.212658    6624 out.go:304] Setting ErrFile to fd 2...
	I0813 17:39:07.212661    6624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:39:07.212783    6624 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:39:07.213790    6624 out.go:298] Setting JSON to false
	I0813 17:39:07.229668    6624 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4111,"bootTime":1723591836,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:39:07.229735    6624 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:39:07.234849    6624 out.go:177] * [default-k8s-diff-port-607000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:39:07.240819    6624 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:39:07.240905    6624 notify.go:220] Checking for updates...
	I0813 17:39:07.247799    6624 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:39:07.250816    6624 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:39:07.253851    6624 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:39:07.256814    6624 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:39:07.259770    6624 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:39:07.263104    6624 config.go:182] Loaded profile config "default-k8s-diff-port-607000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:39:07.263381    6624 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:39:07.267812    6624 out.go:177] * Using the qemu2 driver based on existing profile
	I0813 17:39:07.274812    6624 start.go:297] selected driver: qemu2
	I0813 17:39:07.274819    6624 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-607000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-607000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:39:07.274876    6624 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:39:07.277262    6624 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 17:39:07.277305    6624 cni.go:84] Creating CNI manager for ""
	I0813 17:39:07.277313    6624 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0813 17:39:07.277334    6624 start.go:340] cluster config:
	{Name:default-k8s-diff-port-607000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-607000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:39:07.280912    6624 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:39:07.288831    6624 out.go:177] * Starting "default-k8s-diff-port-607000" primary control-plane node in "default-k8s-diff-port-607000" cluster
	I0813 17:39:07.292778    6624 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:39:07.292797    6624 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 17:39:07.292804    6624 cache.go:56] Caching tarball of preloaded images
	I0813 17:39:07.292854    6624 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:39:07.292860    6624 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0813 17:39:07.292919    6624 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/default-k8s-diff-port-607000/config.json ...
	I0813 17:39:07.293328    6624 start.go:360] acquireMachinesLock for default-k8s-diff-port-607000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:39:07.293362    6624 start.go:364] duration metric: took 25.208µs to acquireMachinesLock for "default-k8s-diff-port-607000"
	I0813 17:39:07.293371    6624 start.go:96] Skipping create...Using existing machine configuration
	I0813 17:39:07.293376    6624 fix.go:54] fixHost starting: 
	I0813 17:39:07.293498    6624 fix.go:112] recreateIfNeeded on default-k8s-diff-port-607000: state=Stopped err=<nil>
	W0813 17:39:07.293506    6624 fix.go:138] unexpected machine state, will restart: <nil>
	I0813 17:39:07.297800    6624 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-607000" ...
	I0813 17:39:07.305785    6624 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:39:07.305823    6624 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/default-k8s-diff-port-607000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/default-k8s-diff-port-607000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/default-k8s-diff-port-607000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:70:37:72:38:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/default-k8s-diff-port-607000/disk.qcow2
	I0813 17:39:07.307928    6624 main.go:141] libmachine: STDOUT: 
	I0813 17:39:07.307950    6624 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:39:07.307979    6624 fix.go:56] duration metric: took 14.6005ms for fixHost
	I0813 17:39:07.307984    6624 start.go:83] releasing machines lock for "default-k8s-diff-port-607000", held for 14.618084ms
	W0813 17:39:07.307989    6624 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:39:07.308019    6624 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:39:07.308023    6624 start.go:729] Will try again in 5 seconds ...
	I0813 17:39:12.310217    6624 start.go:360] acquireMachinesLock for default-k8s-diff-port-607000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:39:12.310731    6624 start.go:364] duration metric: took 357.5µs to acquireMachinesLock for "default-k8s-diff-port-607000"
	I0813 17:39:12.310826    6624 start.go:96] Skipping create...Using existing machine configuration
	I0813 17:39:12.310848    6624 fix.go:54] fixHost starting: 
	I0813 17:39:12.311566    6624 fix.go:112] recreateIfNeeded on default-k8s-diff-port-607000: state=Stopped err=<nil>
	W0813 17:39:12.311596    6624 fix.go:138] unexpected machine state, will restart: <nil>
	I0813 17:39:12.317186    6624 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-607000" ...
	I0813 17:39:12.326064    6624 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:39:12.326359    6624 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/default-k8s-diff-port-607000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/default-k8s-diff-port-607000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/default-k8s-diff-port-607000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:70:37:72:38:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/default-k8s-diff-port-607000/disk.qcow2
	I0813 17:39:12.335709    6624 main.go:141] libmachine: STDOUT: 
	I0813 17:39:12.335785    6624 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:39:12.335889    6624 fix.go:56] duration metric: took 25.038667ms for fixHost
	I0813 17:39:12.335916    6624 start.go:83] releasing machines lock for "default-k8s-diff-port-607000", held for 25.161959ms
	W0813 17:39:12.336179    6624 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-607000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-607000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:39:12.344166    6624 out.go:177] 
	W0813 17:39:12.347235    6624 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:39:12.347379    6624 out.go:239] * 
	* 
	W0813 17:39:12.349834    6624 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:39:12.357076    6624 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-607000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-607000 -n default-k8s-diff-port-607000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-607000 -n default-k8s-diff-port-607000: exit status 7 (66.779625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-607000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.25s)
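
Every qemu2 failure in this run reduces to the same symptom: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the restarted VM never receives its network file descriptor. A minimal Go probe that reproduces the "Connection refused" condition when the socket_vmnet daemon is down (an illustrative triage sketch, not part of the test suite; the socket path is taken from SocketVMnetPath in the config dump above):

// probe_socket_vmnet.go -- hypothetical standalone probe, not minikube code.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// With the daemon down this prints e.g.:
		//   dial unix /var/run/socket_vmnet: connect: connection refused
		fmt.Fprintln(os.Stderr, "dial error:", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails on the CI host, every subsequent qemu2 start in the run fails the same way, which matches the uniform exit status 80 across the remaining StartStop groups.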

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-622000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-622000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.178671083s)

-- stdout --
	* [newest-cni-622000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-622000" primary control-plane node in "newest-cni-622000" cluster
	* Restarting existing qemu2 VM for "newest-cni-622000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-622000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0813 17:39:09.632465    6647 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:39:09.632582    6647 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:39:09.632585    6647 out.go:304] Setting ErrFile to fd 2...
	I0813 17:39:09.632587    6647 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:39:09.632702    6647 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:39:09.633713    6647 out.go:298] Setting JSON to false
	I0813 17:39:09.649360    6647 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4113,"bootTime":1723591836,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 17:39:09.649441    6647 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 17:39:09.654288    6647 out.go:177] * [newest-cni-622000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 17:39:09.659294    6647 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 17:39:09.659349    6647 notify.go:220] Checking for updates...
	I0813 17:39:09.665229    6647 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 17:39:09.668267    6647 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 17:39:09.671319    6647 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 17:39:09.672661    6647 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 17:39:09.675241    6647 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 17:39:09.678556    6647 config.go:182] Loaded profile config "newest-cni-622000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:39:09.678804    6647 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 17:39:09.683141    6647 out.go:177] * Using the qemu2 driver based on existing profile
	I0813 17:39:09.690293    6647 start.go:297] selected driver: qemu2
	I0813 17:39:09.690300    6647 start.go:901] validating driver "qemu2" against &{Name:newest-cni-622000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:39:09.690346    6647 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 17:39:09.692665    6647 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0813 17:39:09.692715    6647 cni.go:84] Creating CNI manager for ""
	I0813 17:39:09.692722    6647 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0813 17:39:09.692749    6647 start.go:340] cluster config:
	{Name:newest-cni-622000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 17:39:09.696192    6647 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 17:39:09.703234    6647 out.go:177] * Starting "newest-cni-622000" primary control-plane node in "newest-cni-622000" cluster
	I0813 17:39:09.707280    6647 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 17:39:09.707300    6647 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 17:39:09.707312    6647 cache.go:56] Caching tarball of preloaded images
	I0813 17:39:09.707360    6647 preload.go:172] Found /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 17:39:09.707365    6647 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0813 17:39:09.707426    6647 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/newest-cni-622000/config.json ...
	I0813 17:39:09.707746    6647 start.go:360] acquireMachinesLock for newest-cni-622000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:39:09.707774    6647 start.go:364] duration metric: took 20.375µs to acquireMachinesLock for "newest-cni-622000"
	I0813 17:39:09.707787    6647 start.go:96] Skipping create...Using existing machine configuration
	I0813 17:39:09.707793    6647 fix.go:54] fixHost starting: 
	I0813 17:39:09.707909    6647 fix.go:112] recreateIfNeeded on newest-cni-622000: state=Stopped err=<nil>
	W0813 17:39:09.707917    6647 fix.go:138] unexpected machine state, will restart: <nil>
	I0813 17:39:09.712272    6647 out.go:177] * Restarting existing qemu2 VM for "newest-cni-622000" ...
	I0813 17:39:09.720260    6647 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:39:09.720293    6647 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/newest-cni-622000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/newest-cni-622000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/newest-cni-622000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:36:2a:82:59:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/newest-cni-622000/disk.qcow2
	I0813 17:39:09.722309    6647 main.go:141] libmachine: STDOUT: 
	I0813 17:39:09.722331    6647 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:39:09.722359    6647 fix.go:56] duration metric: took 14.565875ms for fixHost
	I0813 17:39:09.722363    6647 start.go:83] releasing machines lock for "newest-cni-622000", held for 14.585208ms
	W0813 17:39:09.722370    6647 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:39:09.722412    6647 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:39:09.722420    6647 start.go:729] Will try again in 5 seconds ...
	I0813 17:39:14.724561    6647 start.go:360] acquireMachinesLock for newest-cni-622000: {Name:mk5346ad24a289e9288b5edaf98c340f174da29d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 17:39:14.725028    6647 start.go:364] duration metric: took 339.083µs to acquireMachinesLock for "newest-cni-622000"
	I0813 17:39:14.725121    6647 start.go:96] Skipping create...Using existing machine configuration
	I0813 17:39:14.725143    6647 fix.go:54] fixHost starting: 
	I0813 17:39:14.725861    6647 fix.go:112] recreateIfNeeded on newest-cni-622000: state=Stopped err=<nil>
	W0813 17:39:14.725892    6647 fix.go:138] unexpected machine state, will restart: <nil>
	I0813 17:39:14.730529    6647 out.go:177] * Restarting existing qemu2 VM for "newest-cni-622000" ...
	I0813 17:39:14.739314    6647 qemu.go:418] Using hvf for hardware acceleration
	I0813 17:39:14.739701    6647 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/newest-cni-622000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/newest-cni-622000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/newest-cni-622000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:36:2a:82:59:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19429-1127/.minikube/machines/newest-cni-622000/disk.qcow2
	I0813 17:39:14.749670    6647 main.go:141] libmachine: STDOUT: 
	I0813 17:39:14.749756    6647 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0813 17:39:14.749863    6647 fix.go:56] duration metric: took 24.720916ms for fixHost
	I0813 17:39:14.749884    6647 start.go:83] releasing machines lock for "newest-cni-622000", held for 24.830792ms
	W0813 17:39:14.750186    6647 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-622000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-622000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0813 17:39:14.757362    6647 out.go:177] 
	W0813 17:39:14.760442    6647 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0813 17:39:14.760518    6647 out.go:239] * 
	* 
	W0813 17:39:14.763091    6647 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 17:39:14.771329    6647 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-622000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-622000 -n newest-cni-622000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-622000 -n newest-cni-622000: exit status 7 (68.592333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-622000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
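
The trace above also shows the shape of minikube's recovery path: one failed fixHost, a "StartHost failed, but will try again" warning, a fixed 5-second pause, a second identical failure, then GUEST_PROVISION and exit status 80. A simplified sketch of that observed control flow (fixHost here is a placeholder that always fails, the way both attempts fail above; it is not minikube's real API):

package main

import (
	"errors"
	"fmt"
	"time"
)

var errConnRefused = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

// Placeholder for the host-repair step; always fails, like both attempts above.
func fixHost() error { return errConnRefused }

func main() {
	if err := fixHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := fixHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			// The real binary exits 80 here, recorded by the test as `exit status 80`.
		}
	}
}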

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-607000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-607000 -n default-k8s-diff-port-607000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-607000 -n default-k8s-diff-port-607000: exit status 7 (32.117709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-607000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-607000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-607000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-607000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.764708ms)

** stderr ** 
	error: context "default-k8s-diff-port-607000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-607000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-607000 -n default-k8s-diff-port-607000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-607000 -n default-k8s-diff-port-607000: exit status 7 (28.21875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-607000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-607000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-607000 -n default-k8s-diff-port-607000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-607000 -n default-k8s-diff-port-607000: exit status 7 (27.671042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-607000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
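
The "(-want +got)" block above is a go-cmp style diff: every expected image sits on the minus side because `image list` returned nothing for a profile whose VM never started. A minimal reproduction of that output shape (assuming the github.com/google/go-cmp module; the test's actual comparison helper may differ):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/pause:3.10",
	}
	got := []string{} // empty, as when the profile's VM never started

	if diff := cmp.Diff(want, got); diff != "" {
		// Elements unique to `want` are prefixed with "-", unique to `got` with "+".
		fmt.Printf("v1.31.0 images missing (-want +got):\n%s", diff)
	}
}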

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-607000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-607000 --alsologtostderr -v=1: exit status 83 (39.950125ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-607000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-607000"

-- /stdout --
** stderr ** 
	I0813 17:39:12.618660    6668 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:39:12.618799    6668 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:39:12.618802    6668 out.go:304] Setting ErrFile to fd 2...
	I0813 17:39:12.618805    6668 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:39:12.618939    6668 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:39:12.619152    6668 out.go:298] Setting JSON to false
	I0813 17:39:12.619161    6668 mustload.go:65] Loading cluster: default-k8s-diff-port-607000
	I0813 17:39:12.619355    6668 config.go:182] Loaded profile config "default-k8s-diff-port-607000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:39:12.623492    6668 out.go:177] * The control-plane node default-k8s-diff-port-607000 host is not running: state=Stopped
	I0813 17:39:12.627443    6668 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-607000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-607000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-607000 -n default-k8s-diff-port-607000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-607000 -n default-k8s-diff-port-607000: exit status 7 (28.206291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-607000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-607000 -n default-k8s-diff-port-607000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-607000 -n default-k8s-diff-port-607000: exit status 7 (28.489875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-607000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-622000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-622000 -n newest-cni-622000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-622000 -n newest-cni-622000: exit status 7 (29.367125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-622000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-622000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-622000 --alsologtostderr -v=1: exit status 83 (41.19275ms)

-- stdout --
	* The control-plane node newest-cni-622000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-622000"

-- /stdout --
** stderr ** 
	I0813 17:39:14.955099    6696 out.go:291] Setting OutFile to fd 1 ...
	I0813 17:39:14.955224    6696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:39:14.955228    6696 out.go:304] Setting ErrFile to fd 2...
	I0813 17:39:14.955230    6696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 17:39:14.955356    6696 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 17:39:14.955598    6696 out.go:298] Setting JSON to false
	I0813 17:39:14.955607    6696 mustload.go:65] Loading cluster: newest-cni-622000
	I0813 17:39:14.955808    6696 config.go:182] Loaded profile config "newest-cni-622000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 17:39:14.959865    6696 out.go:177] * The control-plane node newest-cni-622000 host is not running: state=Stopped
	I0813 17:39:14.963895    6696 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-622000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-622000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-622000 -n newest-cni-622000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-622000 -n newest-cni-622000: exit status 7 (28.537417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-622000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-622000 -n newest-cni-622000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-622000 -n newest-cni-622000: exit status 7 (28.702708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-622000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
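
For quick reference, the non-zero exit codes that recur through the failures above, with the meaning each carries in this report (the identifier names below are descriptive labels for readability, not minikube's internal constants):

package report

// Exit codes as observed in this run's log output.
const (
	exitKubectlError   = 1  // `kubectl --context ...` when the context does not exist
	exitStatusStopped  = 7  // `minikube status` on a stopped profile; helpers mark it "(may be ok)"
	exitGuestProvision = 80 // `minikube start` ending with "Exiting due to GUEST_PROVISION"
	exitHostNotRunning = 83 // `minikube pause` when the control-plane host is not running
)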


Test pass (156/274)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.0/json-events 9.32
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.08
18 TestDownloadOnly/v1.31.0/DeleteAll 0.11
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.35
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 138.84
29 TestAddons/serial/Volcano 37.39
31 TestAddons/serial/GCPAuth/Namespaces 0.09
33 TestAddons/parallel/Registry 13.53
34 TestAddons/parallel/Ingress 17.65
35 TestAddons/parallel/InspektorGadget 10.29
36 TestAddons/parallel/MetricsServer 5.29
39 TestAddons/parallel/CSI 54.74
40 TestAddons/parallel/Headlamp 11.44
41 TestAddons/parallel/CloudSpanner 5.21
42 TestAddons/parallel/LocalPath 40.9
43 TestAddons/parallel/NvidiaDevicePlugin 5.16
44 TestAddons/parallel/Yakd 11.26
45 TestAddons/StoppedEnableDisable 12.42
53 TestHyperKitDriverInstallOrUpdate 10.68
56 TestErrorSpam/setup 33.19
57 TestErrorSpam/start 0.34
58 TestErrorSpam/status 0.25
59 TestErrorSpam/pause 0.7
60 TestErrorSpam/unpause 0.63
61 TestErrorSpam/stop 64.29
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 47.38
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 33.09
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.05
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.69
73 TestFunctional/serial/CacheCmd/cache/add_local 1.18
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.03
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
77 TestFunctional/serial/CacheCmd/cache/cache_reload 0.65
78 TestFunctional/serial/CacheCmd/cache/delete 0.07
79 TestFunctional/serial/MinikubeKubectlCmd 0.74
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.02
81 TestFunctional/serial/ExtraConfig 56.65
82 TestFunctional/serial/ComponentHealth 0.05
83 TestFunctional/serial/LogsCmd 0.62
84 TestFunctional/serial/LogsFileCmd 0.61
85 TestFunctional/serial/InvalidService 4.92
87 TestFunctional/parallel/ConfigCmd 0.23
88 TestFunctional/parallel/DashboardCmd 9.01
89 TestFunctional/parallel/DryRun 0.26
90 TestFunctional/parallel/InternationalLanguage 0.11
91 TestFunctional/parallel/StatusCmd 0.24
96 TestFunctional/parallel/AddonsCmd 0.14
97 TestFunctional/parallel/PersistentVolumeClaim 23.97
99 TestFunctional/parallel/SSHCmd 0.12
100 TestFunctional/parallel/CpCmd 0.42
102 TestFunctional/parallel/FileSync 0.07
103 TestFunctional/parallel/CertSync 0.39
107 TestFunctional/parallel/NodeLabels 0.04
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.06
111 TestFunctional/parallel/License 0.37
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.05
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
121 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
124 TestFunctional/parallel/ServiceCmd/List 0.32
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.28
126 TestFunctional/parallel/ServiceCmd/HTTPS 0.12
127 TestFunctional/parallel/ServiceCmd/Format 0.1
128 TestFunctional/parallel/ServiceCmd/URL 0.09
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.12
130 TestFunctional/parallel/ProfileCmd/profile_list 0.12
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
132 TestFunctional/parallel/MountCmd/any-port 5.9
133 TestFunctional/parallel/MountCmd/specific-port 1.06
134 TestFunctional/parallel/MountCmd/VerifyCleanup 0.62
135 TestFunctional/parallel/Version/short 0.04
136 TestFunctional/parallel/Version/components 0.2
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.14
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
141 TestFunctional/parallel/ImageCommands/ImageBuild 1.84
142 TestFunctional/parallel/ImageCommands/Setup 1.82
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.49
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.36
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.16
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.15
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.31
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.17
150 TestFunctional/parallel/DockerEnv/bash 0.28
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
154 TestFunctional/delete_echo-server_images 0.03
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 185.53
161 TestMultiControlPlane/serial/DeployApp 4.17
162 TestMultiControlPlane/serial/PingHostFromPods 0.71
163 TestMultiControlPlane/serial/AddWorkerNode 59.14
164 TestMultiControlPlane/serial/NodeLabels 0.12
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.23
166 TestMultiControlPlane/serial/CopyFile 4.25
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 78.01
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 1.88
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.2
212 TestMainNoArgs 0.03
259 TestStoppedBinaryUpgrade/Setup 1.02
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
276 TestNoKubernetes/serial/ProfileList 31.28
277 TestNoKubernetes/serial/Stop 1.87
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
289 TestStoppedBinaryUpgrade/MinikubeLogs 0.71
294 TestStartStop/group/old-k8s-version/serial/Stop 3.86
295 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.1
307 TestStartStop/group/no-preload/serial/Stop 2.06
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
312 TestStartStop/group/embed-certs/serial/Stop 1.94
313 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
329 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.68
330 TestStartStop/group/newest-cni/serial/DeployApp 0
331 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
332 TestStartStop/group/newest-cni/serial/Stop 3.27
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
335 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-133000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-133000: exit status 85 (97.429875ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-133000 | jenkins | v1.33.1 | 13 Aug 24 16:46 PDT |          |
	|         | -p download-only-133000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/13 16:46:00
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 16:46:00.636355    1637 out.go:291] Setting OutFile to fd 1 ...
	I0813 16:46:00.636527    1637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 16:46:00.636531    1637 out.go:304] Setting ErrFile to fd 2...
	I0813 16:46:00.636533    1637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 16:46:00.636669    1637 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	W0813 16:46:00.636763    1637 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19429-1127/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19429-1127/.minikube/config/config.json: no such file or directory
	I0813 16:46:00.638094    1637 out.go:298] Setting JSON to true
	I0813 16:46:00.655393    1637 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":924,"bootTime":1723591836,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 16:46:00.655462    1637 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 16:46:00.660012    1637 out.go:97] [download-only-133000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 16:46:00.660120    1637 notify.go:220] Checking for updates...
	W0813 16:46:00.660157    1637 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball: no such file or directory
	I0813 16:46:00.664005    1637 out.go:169] MINIKUBE_LOCATION=19429
	I0813 16:46:00.667087    1637 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 16:46:00.672006    1637 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 16:46:00.674998    1637 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 16:46:00.678054    1637 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	W0813 16:46:00.683997    1637 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0813 16:46:00.684221    1637 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 16:46:00.689059    1637 out.go:97] Using the qemu2 driver based on user configuration
	I0813 16:46:00.689079    1637 start.go:297] selected driver: qemu2
	I0813 16:46:00.689094    1637 start.go:901] validating driver "qemu2" against <nil>
	I0813 16:46:00.689199    1637 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 16:46:00.691904    1637 out.go:169] Automatically selected the socket_vmnet network
	I0813 16:46:00.697695    1637 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0813 16:46:00.697796    1637 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0813 16:46:00.697936    1637 cni.go:84] Creating CNI manager for ""
	I0813 16:46:00.697958    1637 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0813 16:46:00.698008    1637 start.go:340] cluster config:
	{Name:download-only-133000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-133000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 16:46:00.703795    1637 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 16:46:00.708085    1637 out.go:97] Downloading VM boot image ...
	I0813 16:46:00.708105    1637 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso
	I0813 16:46:08.348426    1637 out.go:97] Starting "download-only-133000" primary control-plane node in "download-only-133000" cluster
	I0813 16:46:08.348461    1637 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0813 16:46:08.414771    1637 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0813 16:46:08.414778    1637 cache.go:56] Caching tarball of preloaded images
	I0813 16:46:08.414959    1637 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0813 16:46:08.419133    1637 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0813 16:46:08.419140    1637 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0813 16:46:08.505027    1637 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0813 16:46:16.419141    1637 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0813 16:46:16.419310    1637 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0813 16:46:17.114903    1637 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0813 16:46:17.115118    1637 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/download-only-133000/config.json ...
	I0813 16:46:17.115137    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/download-only-133000/config.json: {Name:mk1e307aa0132670a13c259e2d7d9e8dbfa93103 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 16:46:17.115387    1637 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0813 16:46:17.115581    1637 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0813 16:46:17.509690    1637 out.go:169] 
	W0813 16:46:17.515887    1637 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19429-1127/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x105813920 0x105813920 0x105813920 0x105813920 0x105813920 0x105813920 0x105813920] Decompressors:map[bz2:0x1400013c7b0 gz:0x1400013c7b8 tar:0x1400013c720 tar.bz2:0x1400013c730 tar.gz:0x1400013c740 tar.xz:0x1400013c770 tar.zst:0x1400013c7a0 tbz2:0x1400013c730 tgz:0x1400013c740 txz:0x1400013c770 tzst:0x1400013c7a0 xz:0x1400013c7f0 zip:0x1400013c9d0 zst:0x1400013c7f8] Getters:map[file:0x1400090f1e0 http:0x140000b4230 https:0x140000b44b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0813 16:46:17.515913    1637 out_reason.go:110] 
	W0813 16:46:17.523726    1637 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 16:46:17.527739    1637 out.go:169] 
	
	
	* The control-plane node download-only-133000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-133000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
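
The "Failed to cache kubectl ... bad response code: 404" warning captured above is also the root cause of the TestDownloadOnly/v1.20.0/kubectl failure in the summary: dl.k8s.io appears to publish no darwin/arm64 kubectl (nor its .sha256 checksum) for v1.20.0, a release that predates Apple Silicon binaries, so the checksum fetch 404s. A hedged reproduction from any machine, using the URLs from the log:

  curl -s -o /dev/null -w '%{http_code}\n' \
    https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256    # expect 404
  curl -s -o /dev/null -w '%{http_code}\n' \
    https://dl.k8s.io/release/v1.31.0/bin/darwin/arm64/kubectl.sha256    # expect 200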

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-133000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0/json-events (9.32s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-011000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-011000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 : (9.3244905s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (9.32s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-011000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-011000: exit status 85 (75.316791ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-133000 | jenkins | v1.33.1 | 13 Aug 24 16:46 PDT |                     |
	|         | -p download-only-133000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 13 Aug 24 16:46 PDT | 13 Aug 24 16:46 PDT |
	| delete  | -p download-only-133000        | download-only-133000 | jenkins | v1.33.1 | 13 Aug 24 16:46 PDT | 13 Aug 24 16:46 PDT |
	| start   | -o=json --download-only        | download-only-011000 | jenkins | v1.33.1 | 13 Aug 24 16:46 PDT |                     |
	|         | -p download-only-011000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/13 16:46:17
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 16:46:17.940659    1661 out.go:291] Setting OutFile to fd 1 ...
	I0813 16:46:17.940780    1661 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 16:46:17.940784    1661 out.go:304] Setting ErrFile to fd 2...
	I0813 16:46:17.940786    1661 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 16:46:17.940910    1661 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 16:46:17.942010    1661 out.go:298] Setting JSON to true
	I0813 16:46:17.957917    1661 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":941,"bootTime":1723591836,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 16:46:17.957986    1661 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 16:46:17.961207    1661 out.go:97] [download-only-011000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 16:46:17.961334    1661 notify.go:220] Checking for updates...
	I0813 16:46:17.965145    1661 out.go:169] MINIKUBE_LOCATION=19429
	I0813 16:46:17.968197    1661 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 16:46:17.972173    1661 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 16:46:17.975120    1661 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 16:46:17.978259    1661 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	W0813 16:46:17.984141    1661 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0813 16:46:17.984350    1661 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 16:46:17.987182    1661 out.go:97] Using the qemu2 driver based on user configuration
	I0813 16:46:17.987191    1661 start.go:297] selected driver: qemu2
	I0813 16:46:17.987195    1661 start.go:901] validating driver "qemu2" against <nil>
	I0813 16:46:17.987252    1661 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 16:46:17.990036    1661 out.go:169] Automatically selected the socket_vmnet network
	I0813 16:46:17.995156    1661 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0813 16:46:17.995252    1661 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0813 16:46:17.995272    1661 cni.go:84] Creating CNI manager for ""
	I0813 16:46:17.995279    1661 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0813 16:46:17.995286    1661 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 16:46:17.995323    1661 start.go:340] cluster config:
	{Name:download-only-011000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-011000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 16:46:17.998721    1661 iso.go:125] acquiring lock: {Name:mkf9cedac1bdb89f9c4761a64ddf78e6e53b5baa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 16:46:18.002206    1661 out.go:97] Starting "download-only-011000" primary control-plane node in "download-only-011000" cluster
	I0813 16:46:18.002217    1661 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 16:46:18.062620    1661 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0813 16:46:18.062645    1661 cache.go:56] Caching tarball of preloaded images
	I0813 16:46:18.062823    1661 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0813 16:46:18.067975    1661 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0813 16:46:18.067982    1661 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0813 16:46:18.151316    1661 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4?checksum=md5:90c22abece392b762c0b4e45be981bb4 -> /Users/jenkins/minikube-integration/19429-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-011000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-011000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.11s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-011000
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.35s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-227000 --alsologtostderr --binary-mirror http://127.0.0.1:49312 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-227000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-227000
--- PASS: TestBinaryMirror (0.35s)
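
TestBinaryMirror only needs to verify that --binary-mirror redirects the kubectl/kubelet/kubeadm downloads, hence the sub-second runtime. A sketch of standing up such a mirror by hand, assuming the flag simply replaces the dl.k8s.io base URL and keeps its /release/<version>/bin/<os>/<arch>/ layout (the profile name below is hypothetical):

  mkdir -p mirror/release/v1.31.0/bin/darwin/arm64
  # place kubectl and kubectl.sha256 under that directory, then:
  (cd mirror && python3 -m http.server 49312) &
  out/minikube-darwin-arm64 start --download-only -p mirror-demo \
    --binary-mirror http://127.0.0.1:49312 --driver=qemu2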

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-680000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-680000: exit status 85 (58.438625ms)

-- stdout --
	* Profile "addons-680000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-680000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-680000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-680000: exit status 85 (54.374ms)

-- stdout --
	* Profile "addons-680000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-680000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (138.84s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-680000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-680000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (2m18.841970292s)
--- PASS: TestAddons/Setup (138.84s)

TestAddons/serial/Volcano (37.39s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 8.694041ms
addons_test.go:905: volcano-admission stabilized in 8.733291ms
addons_test.go:897: volcano-scheduler stabilized in 8.768458ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-nqgqx" [82e3d668-06f0-4a7b-8acc-03faba120d48] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.005536416s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-b5knc" [96a4164d-8da8-4546-af23-9645bca329e6] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.014136708s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-lxvh7" [bedb6e9e-267f-4a55-b63a-efd13b28021e] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004759834s
addons_test.go:932: (dbg) Run:  kubectl --context addons-680000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-680000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-680000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [a4e41f12-841b-40b5-8191-198313e9671d] Pending
helpers_test.go:344: "test-job-nginx-0" [a4e41f12-841b-40b5-8191-198313e9671d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [a4e41f12-841b-40b5-8191-198313e9671d] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.006939291s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-680000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-680000 addons disable volcano --alsologtostderr -v=1: (10.107522708s)
--- PASS: TestAddons/serial/Volcano (37.39s)
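
The three "waiting 6m0s for pods matching ..." phases above are the test helper polling label selectors. Outside the suite, the same readiness gate can be expressed in one command with kubectl wait (names taken from this run):

  kubectl --context addons-680000 -n my-volcano wait pod \
    -l volcano.sh/job-name=test-job --for=condition=Ready --timeout=180s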

TestAddons/serial/GCPAuth/Namespaces (0.09s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-680000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-680000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)

TestAddons/parallel/Registry (13.53s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.081291ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-frfzw" [ddf6096a-c083-4820-ad57-5d9f9612c2f8] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005673708s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xslrc" [5f8794de-2a2c-4ca2-b5d5-28ee4718e3e1] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006298667s
addons_test.go:342: (dbg) Run:  kubectl --context addons-680000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-680000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-680000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.224428792s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-680000 ip
2024/08/13 16:49:53 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-680000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.53s)
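
The registry is probed twice above: in-cluster through the service DNS name (the busybox wget), and from the host against the node IP that `minikube ip` printed. The host-side half is essentially a Docker Registry HTTP API v2 ping; roughly (a sketch using this run's profile):

  curl -sI http://$(out/minikube-darwin-arm64 -p addons-680000 ip):5000/v2/   # expect HTTP 200 from a healthy registry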

TestAddons/parallel/Ingress (17.65s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-680000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-680000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-680000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [915f7e31-95c5-4be5-9d15-b6937e8d0249] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [915f7e31-95c5-4be5-9d15-b6937e8d0249] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.010321s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-680000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-680000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-680000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-680000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-680000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-680000 addons disable ingress --alsologtostderr -v=1: (7.261436291s)
--- PASS: TestAddons/parallel/Ingress (17.65s)
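
Two details in the flow above: the curl probe runs inside the VM over SSH with an explicit Host header, and the ingress-dns check uses the node IP itself as the DNS server for the hello-john.test name. From the host, rough equivalents would be:

  nslookup hello-john.test $(out/minikube-darwin-arm64 -p addons-680000 ip)
  curl -s --resolve nginx.example.com:80:192.168.105.2 http://nginx.example.com/   # hit the ingress without editing /etc/hosts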

TestAddons/parallel/InspektorGadget (10.29s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-nzgpt" [c3d91fd8-8e5d-4c44-baf6-e8af27e20d36] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004481375s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-680000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-680000: (5.280148834s)
--- PASS: TestAddons/parallel/InspektorGadget (10.29s)

TestAddons/parallel/MetricsServer (5.29s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.313375ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-vf2nm" [21be09a8-fc15-465b-9e8d-f3a6d1c92d18] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009612083s
addons_test.go:417: (dbg) Run:  kubectl --context addons-680000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-680000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.29s)

TestAddons/parallel/CSI (54.74s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 2.411875ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-680000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-680000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [25a58d91-5b56-4f2a-bd7e-ede2dbea369d] Pending
helpers_test.go:344: "task-pv-pod" [25a58d91-5b56-4f2a-bd7e-ede2dbea369d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [25a58d91-5b56-4f2a-bd7e-ede2dbea369d] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.010829166s
addons_test.go:590: (dbg) Run:  kubectl --context addons-680000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-680000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-680000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-680000 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-680000 delete pod task-pv-pod: (1.18579875s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-680000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-680000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-680000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c4028950-89b5-4491-8164-beb313e596b5] Pending
helpers_test.go:344: "task-pv-pod-restore" [c4028950-89b5-4491-8164-beb313e596b5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c4028950-89b5-4491-8164-beb313e596b5] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004998875s
addons_test.go:632: (dbg) Run:  kubectl --context addons-680000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-680000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-680000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-680000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-680000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.131937458s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-680000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (54.74s)
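
The long runs of identical `get pvc ... -o jsonpath={.status.phase}` lines are the helper polling until each claim leaves Pending. The whole loop amounts to (a sketch):

  until [ "$(kubectl --context addons-680000 get pvc hpvc -o jsonpath='{.status.phase}')" = "Bound" ]; do sleep 2; done

On newer kubectl versions, `kubectl wait pvc/hpvc --for=jsonpath='{.status.phase}'=Bound --timeout=6m` collapses the poll into one command.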

TestAddons/parallel/Headlamp (11.44s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-680000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-zsd95" [54be73e4-d9fd-4973-9c48-65020897a809] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-zsd95" [54be73e4-d9fd-4973-9c48-65020897a809] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.0070785s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-680000 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (11.44s)

TestAddons/parallel/CloudSpanner (5.21s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-gqldn" [2f7fc708-04df-46b0-81b9-08e49a2a0dd0] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00875225s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-680000
--- PASS: TestAddons/parallel/CloudSpanner (5.21s)

TestAddons/parallel/LocalPath (40.9s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-680000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-680000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-680000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [80480ced-5c2a-4fcc-9d7a-b509288d9915] Pending
helpers_test.go:344: "test-local-path" [80480ced-5c2a-4fcc-9d7a-b509288d9915] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [80480ced-5c2a-4fcc-9d7a-b509288d9915] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [80480ced-5c2a-4fcc-9d7a-b509288d9915] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.0057195s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-680000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-680000 ssh "cat /opt/local-path-provisioner/pvc-3270fa28-3781-47c7-999f-d02c079b2127_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-680000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-680000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-680000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-680000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.41269575s)
--- PASS: TestAddons/parallel/LocalPath (40.90s)

TestAddons/parallel/NvidiaDevicePlugin (5.16s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-xsnwp" [754bf03e-20f3-427f-ae2d-6df2c1a0f841] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0041385s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-680000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.16s)

TestAddons/parallel/Yakd (11.26s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-rq22f" [9934dbba-9361-4a77-a254-50b8678ccb0f] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003766792s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-680000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-680000 addons disable yakd --alsologtostderr -v=1: (5.251914417s)
--- PASS: TestAddons/parallel/Yakd (11.26s)

TestAddons/StoppedEnableDisable (12.42s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-680000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-680000: (12.229868708s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-680000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-680000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-680000
--- PASS: TestAddons/StoppedEnableDisable (12.42s)

TestHyperKitDriverInstallOrUpdate (10.68s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.68s)

TestErrorSpam/setup (33.19s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-480000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-480000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-480000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-480000 --driver=qemu2 : (33.192553291s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0."
--- PASS: TestErrorSpam/setup (33.19s)
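
Note: the version-skew warning is the only stderr the setup step accepts as harmless. kubectl 1.29.2 against Kubernetes 1.31.0 is outside the supported one-minor-version skew, which can be confirmed by comparing client and cluster versions:

  $ /usr/local/bin/kubectl version --client
  $ out/minikube-darwin-arm64 -p nospam-480000 kubectl -- version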

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-480000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-480000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-480000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-480000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-480000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-480000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-480000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-480000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-480000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-480000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-480000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-480000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.7s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-480000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-480000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-480000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-480000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-480000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-480000 pause
--- PASS: TestErrorSpam/pause (0.70s)

TestErrorSpam/unpause (0.63s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-480000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-480000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-480000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-480000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-480000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-480000 unpause
--- PASS: TestErrorSpam/unpause (0.63s)

TestErrorSpam/stop (64.29s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-480000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-480000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-480000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-480000 stop: (12.204067417s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-480000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-480000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-480000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-480000 stop: (26.057000166s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-480000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-480000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-480000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-480000 stop: (26.027849542s)
--- PASS: TestErrorSpam/stop (64.29s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19429-1127/.minikube/files/etc/test/nested/copy/1635/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (47.38s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-174000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0813 16:53:46.999050    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
E0813 16:53:47.008010    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
E0813 16:53:47.021426    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
E0813 16:53:47.044782    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
E0813 16:53:47.088135    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
E0813 16:53:47.171527    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
E0813 16:53:47.334939    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
E0813 16:53:47.658474    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
E0813 16:53:48.301949    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
E0813 16:53:49.585698    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
E0813 16:53:52.150258    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-174000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (47.375957291s)
--- PASS: TestFunctional/serial/StartWithProxy (47.38s)
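
Note: the E0813 cert_rotation errors appear to come from the shared test process (pid 1635) still watching the client certificate of the addons-680000 profile, which was deleted after TestAddons finished; they are leftover noise rather than a failure of this test.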

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (33.09s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-174000 --alsologtostderr -v=8
E0813 16:53:57.274244    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
E0813 16:54:07.517709    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-174000 --alsologtostderr -v=8: (33.088525334s)
functional_test.go:663: soft start took 33.088951167s for "functional-174000" cluster.
--- PASS: TestFunctional/serial/SoftStart (33.09s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-174000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.69s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 cache add registry.k8s.io/pause:3.1
E0813 16:54:28.000935    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-174000 cache add registry.k8s.io/pause:3.1: (1.003651708s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.69s)

TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-174000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local3293209500/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 cache add minikube-local-cache-test:functional-174000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 cache delete minikube-local-cache-test:functional-174000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-174000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (65.687416ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.65s)
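
Note: the reload sequence above reduces to three steps: remove the image inside the node, confirm crictl no longer sees it (the expected exit status 1), then push the cached copy back in. A condensed sketch against the same profile:

  $ out/minikube-darwin-arm64 -p functional-174000 ssh sudo docker rmi registry.k8s.io/pause:latest
  $ out/minikube-darwin-arm64 -p functional-174000 cache reload
  $ out/minikube-darwin-arm64 -p functional-174000 ssh sudo crictl inspecti registry.k8s.io/pause:latest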

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.74s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 kubectl -- --context functional-174000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.74s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-174000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-174000 get pods: (1.021326709s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

TestFunctional/serial/ExtraConfig (56.65s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-174000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0813 16:55:08.964258    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-174000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (56.651904666s)
functional_test.go:761: restart took 56.652011875s for "functional-174000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (56.65s)
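
Note: --extra-config=component.key=value pairs are handed to the named component, here the kube-apiserver static pod. Whether the flag actually landed can be checked against the pod spec; an illustrative check, not part of the test:

  $ kubectl --context functional-174000 -n kube-system get pods -l component=kube-apiserver -o yaml | grep enable-admission-plugins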

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-174000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (0.62s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.62s)

TestFunctional/serial/LogsFileCmd (0.61s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd2128410801/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.61s)

TestFunctional/serial/InvalidService (4.92s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-174000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-174000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-174000: exit status 115 (146.619417ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30774 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-174000 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-174000 delete -f testdata/invalidsvc.yaml: (1.66817775s)
--- PASS: TestFunctional/serial/InvalidService (4.92s)
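
Note: the SVC_UNREACHABLE exit is the expected outcome: invalid-svc selects a pod that never runs, so the service gets a NodePort URL but no ready endpoints. The condition behind the error can be inspected directly:

  $ kubectl --context functional-174000 get endpoints invalid-svc    # ENDPOINTS column stays empty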

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 config get cpus: exit status 14 (29.805458ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 config get cpus: exit status 14 (31.146041ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)
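
Note: both exit-status-14 results are the assertion rather than a failure: "config get" on an unset key is defined to fail. The round trip being exercised is simply:

  $ out/minikube-darwin-arm64 -p functional-174000 config set cpus 2
  $ out/minikube-darwin-arm64 -p functional-174000 config get cpus      # prints 2
  $ out/minikube-darwin-arm64 -p functional-174000 config unset cpus
  $ out/minikube-darwin-arm64 -p functional-174000 config get cpus      # exit status 14, key not found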

TestFunctional/parallel/DashboardCmd (9.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-174000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-174000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2293: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.01s)

TestFunctional/parallel/DryRun (0.26s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-174000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-174000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (119.856042ms)
-- stdout --
	* [functional-174000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0813 16:56:16.091022    2275 out.go:291] Setting OutFile to fd 1 ...
	I0813 16:56:16.091170    2275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 16:56:16.091174    2275 out.go:304] Setting ErrFile to fd 2...
	I0813 16:56:16.091176    2275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 16:56:16.091304    2275 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 16:56:16.092504    2275 out.go:298] Setting JSON to false
	I0813 16:56:16.110918    2275 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1540,"bootTime":1723591836,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 16:56:16.111035    2275 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 16:56:16.114093    2275 out.go:177] * [functional-174000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0813 16:56:16.121103    2275 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 16:56:16.121139    2275 notify.go:220] Checking for updates...
	I0813 16:56:16.128980    2275 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 16:56:16.133036    2275 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 16:56:16.134371    2275 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 16:56:16.137019    2275 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 16:56:16.140083    2275 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 16:56:16.143418    2275 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 16:56:16.143693    2275 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 16:56:16.147990    2275 out.go:177] * Using the qemu2 driver based on existing profile
	I0813 16:56:16.155068    2275 start.go:297] selected driver: qemu2
	I0813 16:56:16.155075    2275 start.go:901] validating driver "qemu2" against &{Name:functional-174000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-174000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 16:56:16.155122    2275 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 16:56:16.161001    2275 out.go:177] 
	W0813 16:56:16.165057    2275 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0813 16:56:16.168900    2275 out.go:177] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-174000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.26s)
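
Note: --dry-run performs the full flag and resource validation without creating a VM, which is why the undersized request fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) while the second, unconstrained dry run passes:

  $ out/minikube-darwin-arm64 start -p functional-174000 --dry-run --memory 250MB --driver=qemu2; echo $?    # 23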

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-174000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-174000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (108.736542ms)
-- stdout --
	* [functional-174000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0813 16:56:15.976396    2269 out.go:291] Setting OutFile to fd 1 ...
	I0813 16:56:15.976515    2269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 16:56:15.976520    2269 out.go:304] Setting ErrFile to fd 2...
	I0813 16:56:15.976522    2269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 16:56:15.976661    2269 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
	I0813 16:56:15.978151    2269 out.go:298] Setting JSON to false
	I0813 16:56:15.996109    2269 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1539,"bootTime":1723591836,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0813 16:56:15.996211    2269 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0813 16:56:16.000135    2269 out.go:177] * [functional-174000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0813 16:56:16.003104    2269 notify.go:220] Checking for updates...
	I0813 16:56:16.006973    2269 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 16:56:16.011096    2269 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	I0813 16:56:16.014002    2269 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0813 16:56:16.017048    2269 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 16:56:16.020054    2269 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	I0813 16:56:16.023093    2269 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 16:56:16.026355    2269 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0813 16:56:16.026626    2269 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 16:56:16.031028    2269 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0813 16:56:16.038051    2269 start.go:297] selected driver: qemu2
	I0813 16:56:16.038062    2269 start.go:901] validating driver "qemu2" against &{Name:functional-174000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-174000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 16:56:16.038117    2269 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 16:56:16.045014    2269 out.go:177] 
	W0813 16:56:16.049247    2269 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0813 16:56:16.053019    2269 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
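
Note: the French output is the same undersized dry run executed under a French locale; minikube picks its message catalogue from the process locale, presumably set by the harness via LC_ALL/LANG. A sketch of the manual equivalent:

  $ LC_ALL=fr_FR.UTF-8 out/minikube-darwin-arm64 start -p functional-174000 --dry-run --memory 250MB --driver=qemu2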

TestFunctional/parallel/StatusCmd (0.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (23.97s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2bf9860c-c09e-4957-958b-a1eda71db78e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.011338959s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-174000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-174000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-174000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-174000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0d00c9d7-4965-47e9-a283-3122af649d91] Pending
helpers_test.go:344: "sp-pod" [0d00c9d7-4965-47e9-a283-3122af649d91] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0d00c9d7-4965-47e9-a283-3122af649d91] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.011097417s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-174000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-174000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-174000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4d817ba5-9fdb-480a-9937-4281266aeb49] Pending
helpers_test.go:344: "sp-pod" [4d817ba5-9fdb-480a-9937-4281266aeb49] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4d817ba5-9fdb-480a-9937-4281266aeb49] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.014520166s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-174000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.97s)
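
Note: the substance of this test is the last four steps: write a file through the claim, delete the pod, schedule a fresh pod on the same PVC, and confirm the file survived. Condensed:

  $ kubectl --context functional-174000 exec sp-pod -- touch /tmp/mount/foo
  $ kubectl --context functional-174000 delete -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-174000 apply -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-174000 exec sp-pod -- ls /tmp/mount    # foo persists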

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.42s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh -n functional-174000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 cp functional-174000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1360560924/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh -n functional-174000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh -n functional-174000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.42s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1635/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "sudo cat /etc/test/nested/copy/1635/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)
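
FileSync verifies that files staged under the host's $MINIKUBE_HOME/files tree are mirrored into the guest at the same relative path; the 1635 path component is just this test run's PID. A hedged sketch of the same round trip (the "demo" directory name is illustrative):

  # stage a file on the host; minikube copies the files/ tree into the guest on start
  mkdir -p ~/.minikube/files/etc/test/nested/copy/demo
  echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/demo/hosts
  # after (re)starting the profile, the file should be visible in the guest
  minikube -p functional-174000 ssh "sudo cat /etc/test/nested/copy/demo/hosts"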

TestFunctional/parallel/CertSync (0.39s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1635.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "sudo cat /etc/ssl/certs/1635.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1635.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "sudo cat /usr/share/ca-certificates/1635.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/16352.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "sudo cat /etc/ssl/certs/16352.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/16352.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "sudo cat /usr/share/ca-certificates/16352.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.39s)
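
The .0 files checked above are OpenSSL subject-hash aliases for the synced certs: /etc/ssl/certs/51391683.0 names the same certificate as 1635.pem. If the guest image ships the openssl CLI, the pairing can be confirmed by hand (a sketch):

  # print the hash OpenSSL uses for the alias filename
  minikube -p functional-174000 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/1635.pem"
  # the alias and the original should show the same certificate
  minikube -p functional-174000 ssh "sudo cat /etc/ssl/certs/51391683.0"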

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-174000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)
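
The go-template above just walks the first node's label keys. A stock kubectl flag surfaces the same information (an equivalent check, not what the test itself runs):

  kubectl --context functional-174000 get nodes --show-labels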

TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "sudo systemctl is-active crio": exit status 1 (60.631583ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)
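
The non-zero exit here is the expected outcome: systemctl is-active prints "inactive" and exits with status 3 for a stopped unit, which ssh propagates and minikube reports as exit status 1. The test passes because crio should be inactive on a docker-runtime profile. The complementary check by hand (a sketch):

  # should print "active" and exit 0 on this profile
  minikube -p functional-174000 ssh "sudo systemctl is-active docker"
  # should print "inactive" and exit non-zero
  minikube -p functional-174000 ssh "sudo systemctl is-active crio"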

TestFunctional/parallel/License (0.37s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.37s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-174000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-174000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-174000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-174000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2131: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-174000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-174000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [3fbbbc78-4ac4-4555-9fd0-4ac9b4235044] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [3fbbbc78-4ac4-4555-9fd0-4ac9b4235044] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.00373625s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-174000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.128.98 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.05s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-174000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
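
Taken together, the serial tunnel steps map onto a short manual workflow (a sketch; the tunnel blocks and may prompt for sudo, so it gets its own terminal):

  # terminal 1: route LoadBalancer traffic from the host into the cluster
  minikube -p functional-174000 tunnel
  # terminal 2: read the service's external IP and hit it
  IP=$(kubectl --context functional-174000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  curl "http://$IP"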

TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-174000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-174000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-pgqjf" [11c54ba2-25e6-47f1-8474-1d08a2be7ca1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-pgqjf" [11c54ba2-25e6-47f1-8474-1d08a2be7ca1] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.011753541s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)
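
The deployment that the remaining ServiceCmd steps query is created with plain kubectl (a sketch; echoserver-arm:1.8 is simply the arm64 echo image this suite deploys):

  kubectl --context functional-174000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
  kubectl --context functional-174000 expose deployment hello-node --type=NodePort --port=8080
  kubectl --context functional-174000 get pods -l app=hello-node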

TestFunctional/parallel/ServiceCmd/List (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 service list -o json
functional_test.go:1494: Took "278.345458ms" to run "out/minikube-darwin-arm64 -p functional-174000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:30272
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.12s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:30272
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.09s)
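
HTTPS, Format, and URL are the same NodePort lookup with different output shaping (a sketch):

  minikube -p functional-174000 service hello-node --url                      # http://<node-ip>:<nodeport>
  minikube -p functional-174000 service hello-node --url --https             # same endpoint, https scheme
  minikube -p functional-174000 service hello-node --url --format="{{.IP}}"  # just the node IP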

TestFunctional/parallel/ProfileCmd/profile_not_create (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.12s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "82.395709ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "33.892917ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "81.379333ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "34.793458ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
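
Note the misspelled "profile lis" in profile_not_create appears deliberate: the point is that a bad subcommand must not create a profile. The other two cases time the same listing under different output modes; --light skips the per-cluster status probe, which is why it returns in less than half the time above. A sketch:

  minikube profile list
  minikube profile list -o json
  minikube profile list -o json --light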

TestFunctional/parallel/MountCmd/any-port (5.9s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-174000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2311625679/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723593368133830000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2311625679/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723593368133830000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2311625679/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723593368133830000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2311625679/001/test-1723593368133830000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Done: out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p": (1.300333917s)
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 13 23:56 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 13 23:56 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 13 23:56 test-1723593368133830000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh cat /mount-9p/test-1723593368133830000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-174000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [eb6c4afb-67fb-4432-997f-9b157e1e23eb] Pending
helpers_test.go:344: "busybox-mount" [eb6c4afb-67fb-4432-997f-9b157e1e23eb] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [eb6c4afb-67fb-4432-997f-9b157e1e23eb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [eb6c4afb-67fb-4432-997f-9b157e1e23eb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.012448s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-174000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-174000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2311625679/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.90s)
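
The mount tests drive the host-to-guest 9p mount end to end: start the mount daemon, confirm with findmnt, exercise the files from a pod, then unmount. The core loop by hand (a sketch; the specific-port case below just adds --port to pin the server port):

  # terminal 1: export a host directory into the guest over 9p (blocks while mounted)
  minikube -p functional-174000 mount "$PWD/data:/mount-9p"
  # terminal 2: confirm the mount and inspect it
  minikube -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p"
  minikube -p functional-174000 ssh -- ls -la /mount-9p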

TestFunctional/parallel/MountCmd/specific-port (1.06s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-174000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port423540265/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (59.364292ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-174000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port423540265/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "sudo umount -f /mount-9p": exit status 1 (61.015667ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-174000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-174000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port423540265/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.06s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.62s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-174000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2187477251/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-174000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2187477251/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-174000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2187477251/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T" /mount1: exit status 1 (76.519167ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-174000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-174000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2187477251/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-174000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2187477251/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-174000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2187477251/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.62s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.2s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.20s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-174000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-174000
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-174000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-174000 image ls --format short --alsologtostderr:
I0813 16:56:23.631952    2411 out.go:291] Setting OutFile to fd 1 ...
I0813 16:56:23.632111    2411 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0813 16:56:23.632115    2411 out.go:304] Setting ErrFile to fd 2...
I0813 16:56:23.632117    2411 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0813 16:56:23.632253    2411 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
I0813 16:56:23.632676    2411 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0813 16:56:23.632735    2411 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0813 16:56:23.633675    2411 ssh_runner.go:195] Run: systemctl --version
I0813 16:56:23.633684    2411 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/functional-174000/id_rsa Username:docker}
I0813 16:56:23.657772    2411 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-174000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| localhost/my-image                          | functional-174000 | f8ba08ccaf6cc | 1.41MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kicbase/echo-server               | functional-174000 | ce2d2cda2d858 | 4.78MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/library/minikube-local-cache-test | functional-174000 | ced8b51d83b8d | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.0           | cd0f0ae0ec9e0 | 91.5MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | fcb0683e6bdbd | 85.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.0           | 71d55d66fd4ee | 94.7MB |
| docker.io/library/nginx                     | latest            | 235ff27fe7956 | 193MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-scheduler              | v1.31.0           | fbbbd428abb4d | 66MB   |
| docker.io/library/nginx                     | alpine            | d7cd33d7d4ed1 | 44.8MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-174000 image ls --format table --alsologtostderr:
I0813 16:56:25.500682    2422 out.go:291] Setting OutFile to fd 1 ...
I0813 16:56:25.500824    2422 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0813 16:56:25.500828    2422 out.go:304] Setting ErrFile to fd 2...
I0813 16:56:25.500831    2422 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0813 16:56:25.500986    2422 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
I0813 16:56:25.501389    2422 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0813 16:56:25.501453    2422 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0813 16:56:25.502224    2422 ssh_runner.go:195] Run: systemctl --version
I0813 16:56:25.502234    2422 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/functional-174000/id_rsa Username:docker}
I0813 16:56:25.529335    2422 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-174000 image ls --format json --alsologtostderr:
[{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"ced8b51d83b8dc2eb4749d2f9b03e33ebcf3e566c20de96a1a00fbca5b088753","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-174000"],"size":"30"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"66000000"},{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"85900000"},{"id":"afb61768ce3
81961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"94700000"},{"id":"235ff27fe79567e8ccaf4d26a2d24828a65898a83b97fba3c7e39ec4621e1b51","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-174000"],"size":"4780000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags"
:["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"91500000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"44800000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"s
ize":"3550000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-174000 image ls --format json --alsologtostderr:
I0813 16:56:25.363572    2420 out.go:291] Setting OutFile to fd 1 ...
I0813 16:56:25.363753    2420 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0813 16:56:25.363756    2420 out.go:304] Setting ErrFile to fd 2...
I0813 16:56:25.363762    2420 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0813 16:56:25.363885    2420 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
I0813 16:56:25.364351    2420 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0813 16:56:25.364426    2420 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0813 16:56:25.365251    2420 ssh_runner.go:195] Run: systemctl --version
I0813 16:56:25.365262    2420 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/functional-174000/id_rsa Username:docker}
I0813 16:56:25.402802    2420 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.14s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-174000 image ls --format yaml --alsologtostderr:
- id: d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "44800000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "91500000"
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "66000000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "85900000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-174000
size: "4780000"
- id: ced8b51d83b8dc2eb4749d2f9b03e33ebcf3e566c20de96a1a00fbca5b088753
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-174000
size: "30"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "94700000"
- id: 235ff27fe79567e8ccaf4d26a2d24828a65898a83b97fba3c7e39ec4621e1b51
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-174000 image ls --format yaml --alsologtostderr:
I0813 16:56:23.701988    2413 out.go:291] Setting OutFile to fd 1 ...
I0813 16:56:23.702152    2413 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0813 16:56:23.702156    2413 out.go:304] Setting ErrFile to fd 2...
I0813 16:56:23.702159    2413 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0813 16:56:23.702299    2413 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
I0813 16:56:23.702757    2413 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0813 16:56:23.702831    2413 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0813 16:56:23.703761    2413 ssh_runner.go:195] Run: systemctl --version
I0813 16:56:23.703770    2413 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/functional-174000/id_rsa Username:docker}
I0813 16:56:23.732354    2413 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh pgrep buildkitd: exit status 1 (57.260917ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image build -t localhost/my-image:functional-174000 testdata/build --alsologtostderr
2024/08/13 16:56:25 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-174000 image build -t localhost/my-image:functional-174000 testdata/build --alsologtostderr: (1.710551s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-174000 image build -t localhost/my-image:functional-174000 testdata/build --alsologtostderr:
I0813 16:56:23.834533    2417 out.go:291] Setting OutFile to fd 1 ...
I0813 16:56:23.834763    2417 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0813 16:56:23.834767    2417 out.go:304] Setting ErrFile to fd 2...
I0813 16:56:23.834770    2417 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0813 16:56:23.834893    2417 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19429-1127/.minikube/bin
I0813 16:56:23.835360    2417 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0813 16:56:23.836142    2417 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0813 16:56:23.837035    2417 ssh_runner.go:195] Run: systemctl --version
I0813 16:56:23.837048    2417 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19429-1127/.minikube/machines/functional-174000/id_rsa Username:docker}
I0813 16:56:23.860744    2417 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3972405751.tar
I0813 16:56:23.860799    2417 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0813 16:56:23.865353    2417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3972405751.tar
I0813 16:56:23.868881    2417 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3972405751.tar: stat -c "%s %y" /var/lib/minikube/build/build.3972405751.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3972405751.tar': No such file or directory
I0813 16:56:23.868900    2417 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3972405751.tar --> /var/lib/minikube/build/build.3972405751.tar (3072 bytes)
I0813 16:56:23.878533    2417 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3972405751
I0813 16:56:23.882439    2417 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3972405751 -xf /var/lib/minikube/build/build.3972405751.tar
I0813 16:56:23.885681    2417 docker.go:360] Building image: /var/lib/minikube/build/build.3972405751
I0813 16:56:23.885718    2417 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-174000 /var/lib/minikube/build/build.3972405751
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:f8ba08ccaf6cc02e9ed8fce4cd14dc1e9f6936e3b6bfa787c0924d3faa60d2e0 done
#8 naming to localhost/my-image:functional-174000 done
#8 DONE 0.1s
I0813 16:56:25.500263    2417 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-174000 /var/lib/minikube/build/build.3972405751: (1.614556208s)
I0813 16:56:25.500316    2417 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3972405751
I0813 16:56:25.504784    2417 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3972405751.tar
I0813 16:56:25.508220    2417 build_images.go:217] Built localhost/my-image:functional-174000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3972405751.tar
I0813 16:56:25.508235    2417 build_images.go:133] succeeded building to: functional-174000
I0813 16:56:25.508237    2417 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.84s)
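
As the trace shows, image build ships a tar of the build context into the guest and runs docker build there. Reproducing it directly (a sketch; testdata/build is just the tiny Dockerfile context used by this suite):

  minikube -p functional-174000 image build -t localhost/my-image:functional-174000 testdata/build
  minikube -p functional-174000 image ls | grep my-image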

TestFunctional/parallel/ImageCommands/Setup (1.82s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.804481708s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-174000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.82s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image load --daemon kicbase/echo-server:functional-174000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.49s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image load --daemon kicbase/echo-server:functional-174000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-174000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image load --daemon kicbase/echo-server:functional-174000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image save kicbase/echo-server:functional-174000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image rm kicbase/echo-server:functional-174000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.31s)
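
ImageSaveToFile, ImageRemove, and ImageLoadFromFile together exercise a tar-based round trip out of and back into the cluster; the same sequence, sketched with /tmp in place of the Jenkins workspace path:

    $ minikube -p functional-174000 image save kicbase/echo-server:functional-174000 /tmp/echo-server-save.tar
    $ minikube -p functional-174000 image rm kicbase/echo-server:functional-174000
    $ minikube -p functional-174000 image load /tmp/echo-server-save.tar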

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-174000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image save --daemon kicbase/echo-server:functional-174000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-174000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)

TestFunctional/parallel/DockerEnv/bash (0.28s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-174000 docker-env) && out/minikube-darwin-arm64 status -p functional-174000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-174000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.28s)
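
The docker-env check amounts to pointing the host Docker CLI at the daemon inside the VM for the duration of a shell session; a minimal sketch:

    $ eval $(minikube -p functional-174000 docker-env)
    $ docker images        # now lists images inside the minikube VM, not on the host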

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)
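
All three UpdateContextCmd cases run the same command, which rewrites the profile's kubeconfig entry so the API server address matches the VM's current IP; a sketch, assuming kubectl is installed on the host:

    $ minikube -p functional-174000 update-context
    $ kubectl config get-contexts functional-174000        # the context should exist and resolve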

TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-174000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-174000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-174000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (185.53s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-699000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0813 16:56:30.887198    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
E0813 16:58:46.992352    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
E0813 16:59:14.726678    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-699000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m5.3412325s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (185.53s)
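
The --ha flag provisions a cluster with multiple control-plane nodes rather than a single one; the invocation above, reduced to its essentials:

    $ minikube start -p ha-699000 --ha --wait=true --memory=2200 --driver=qemu2
    $ minikube -p ha-699000 status        # one status block per node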

TestMultiControlPlane/serial/DeployApp (4.17s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-699000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-699000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-699000 -- rollout status deployment/busybox: (2.668585667s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-699000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-699000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-699000 -- exec busybox-7dff88458-5cjff -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-699000 -- exec busybox-7dff88458-7w22q -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-699000 -- exec busybox-7dff88458-nwzwd -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-699000 -- exec busybox-7dff88458-5cjff -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-699000 -- exec busybox-7dff88458-7w22q -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-699000 -- exec busybox-7dff88458-nwzwd -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-699000 -- exec busybox-7dff88458-5cjff -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-699000 -- exec busybox-7dff88458-7w22q -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-699000 -- exec busybox-7dff88458-nwzwd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.17s)

TestMultiControlPlane/serial/PingHostFromPods (0.71s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-699000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-699000 -- exec busybox-7dff88458-5cjff -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-699000 -- exec busybox-7dff88458-5cjff -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-699000 -- exec busybox-7dff88458-7w22q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-699000 -- exec busybox-7dff88458-7w22q -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-699000 -- exec busybox-7dff88458-nwzwd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-699000 -- exec busybox-7dff88458-nwzwd -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.71s)
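
host.minikube.internal is a DNS name minikube injects so pods can reach the host machine; the per-pod checks above reduce to the following pair (the pod name is a placeholder):

    $ kubectl --context ha-699000 exec <busybox-pod> -- nslookup host.minikube.internal
    $ kubectl --context ha-699000 exec <busybox-pod> -- ping -c 1 192.168.105.1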

TestMultiControlPlane/serial/AddWorkerNode (59.14s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-699000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-699000 -v=7 --alsologtostderr: (58.918120167s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.14s)
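
node add without --control-plane joins an additional worker to an existing profile; the step above, reduced:

    $ minikube node add -p ha-699000
    $ minikube -p ha-699000 status        # the new node appears as a worker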

TestMultiControlPlane/serial/NodeLabels (0.12s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-699000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.23s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.23s)

TestMultiControlPlane/serial/CopyFile (4.25s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 cp testdata/cp-test.txt ha-699000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 cp ha-699000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3217486646/001/cp-test_ha-699000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 cp ha-699000:/home/docker/cp-test.txt ha-699000-m02:/home/docker/cp-test_ha-699000_ha-699000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000-m02 "sudo cat /home/docker/cp-test_ha-699000_ha-699000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 cp ha-699000:/home/docker/cp-test.txt ha-699000-m03:/home/docker/cp-test_ha-699000_ha-699000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000-m03 "sudo cat /home/docker/cp-test_ha-699000_ha-699000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 cp ha-699000:/home/docker/cp-test.txt ha-699000-m04:/home/docker/cp-test_ha-699000_ha-699000-m04.txt
E0813 17:00:36.807883    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/functional-174000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000 "sudo cat /home/docker/cp-test.txt"
E0813 17:00:36.815350    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/functional-174000/client.crt: no such file or directory" logger="UnhandledError"
E0813 17:00:36.827990    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/functional-174000/client.crt: no such file or directory" logger="UnhandledError"
E0813 17:00:36.851020    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/functional-174000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000-m04 "sudo cat /home/docker/cp-test_ha-699000_ha-699000-m04.txt"
E0813 17:00:36.892626    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/functional-174000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 cp testdata/cp-test.txt ha-699000-m02:/home/docker/cp-test.txt
E0813 17:00:36.975861    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/functional-174000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 cp ha-699000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3217486646/001/cp-test_ha-699000-m02.txt
E0813 17:00:37.139432    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/functional-174000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 cp ha-699000-m02:/home/docker/cp-test.txt ha-699000:/home/docker/cp-test_ha-699000-m02_ha-699000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000 "sudo cat /home/docker/cp-test_ha-699000-m02_ha-699000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 cp ha-699000-m02:/home/docker/cp-test.txt ha-699000-m03:/home/docker/cp-test_ha-699000-m02_ha-699000-m03.txt
E0813 17:00:37.460827    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/functional-174000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000-m03 "sudo cat /home/docker/cp-test_ha-699000-m02_ha-699000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 cp ha-699000-m02:/home/docker/cp-test.txt ha-699000-m04:/home/docker/cp-test_ha-699000-m02_ha-699000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000-m04 "sudo cat /home/docker/cp-test_ha-699000-m02_ha-699000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 cp testdata/cp-test.txt ha-699000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 cp ha-699000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3217486646/001/cp-test_ha-699000-m03.txt
E0813 17:00:38.102972    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/functional-174000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 cp ha-699000-m03:/home/docker/cp-test.txt ha-699000:/home/docker/cp-test_ha-699000-m03_ha-699000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000 "sudo cat /home/docker/cp-test_ha-699000-m03_ha-699000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 cp ha-699000-m03:/home/docker/cp-test.txt ha-699000-m02:/home/docker/cp-test_ha-699000-m03_ha-699000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000-m02 "sudo cat /home/docker/cp-test_ha-699000-m03_ha-699000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 cp ha-699000-m03:/home/docker/cp-test.txt ha-699000-m04:/home/docker/cp-test_ha-699000-m03_ha-699000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000-m04 "sudo cat /home/docker/cp-test_ha-699000-m03_ha-699000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 cp testdata/cp-test.txt ha-699000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 cp ha-699000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3217486646/001/cp-test_ha-699000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 cp ha-699000-m04:/home/docker/cp-test.txt ha-699000:/home/docker/cp-test_ha-699000-m04_ha-699000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000-m04 "sudo cat /home/docker/cp-test.txt"
E0813 17:00:39.386667    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/functional-174000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000 "sudo cat /home/docker/cp-test_ha-699000-m04_ha-699000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 cp ha-699000-m04:/home/docker/cp-test.txt ha-699000-m02:/home/docker/cp-test_ha-699000-m04_ha-699000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000-m02 "sudo cat /home/docker/cp-test_ha-699000-m04_ha-699000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 cp ha-699000-m04:/home/docker/cp-test.txt ha-699000-m03:/home/docker/cp-test_ha-699000-m04_ha-699000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-699000 ssh -n ha-699000-m03 "sudo cat /home/docker/cp-test_ha-699000-m04_ha-699000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.25s)
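
The CopyFile matrix pushes one test file through every host-to-node, node-to-host, and node-to-node combination; each cell is a cp followed by an ssh readback, for example:

    $ minikube -p ha-699000 cp testdata/cp-test.txt ha-699000-m02:/home/docker/cp-test.txt
    $ minikube -p ha-699000 ssh -n ha-699000-m02 "sudo cat /home/docker/cp-test.txt"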

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (78.01s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0813 17:10:10.103081    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/addons-680000/client.crt: no such file or directory" logger="UnhandledError"
E0813 17:10:36.822300    1635 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19429-1127/.minikube/profiles/functional-174000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m18.0110575s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (78.01s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.88s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-657000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-657000 --output=json --user=testUser: (1.878216292s)
--- PASS: TestJSONOutput/stop/Command (1.88s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-513000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-513000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (94.843542ms)

-- stdout --
	{"specversion":"1.0","id":"d769a989-ccf7-4dd2-9c21-ec0c86cc3b16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-513000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a45efc35-4680-4286-a4f6-152cadabee8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19429"}}
	{"specversion":"1.0","id":"6c88d0c4-15a4-4df5-a0dd-4f1a618472d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig"}}
	{"specversion":"1.0","id":"11aae794-15d8-434a-906c-db00074fecec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"03249fba-3cbd-498e-98eb-a79303bcdc61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e8856fe5-712d-4ccf-b462-27a47064e254","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube"}}
	{"specversion":"1.0","id":"daec46fd-fd80-4a5f-8188-057e6d6ae61a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7523c5ae-63d2-4527-90e0-2dfe9a98a0bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-513000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-513000
--- PASS: TestErrorJSONOutput (0.20s)
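
With --output=json, minikube emits one CloudEvents-style JSON object per line, so errors can be picked out mechanically; a sketch assuming jq is available on the host ("demo" is an illustrative profile name):

    $ minikube start -p demo --driver=fail --output=json | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'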

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.02s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.02s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-702000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-702000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (103.802042ms)

-- stdout --
	* [NoKubernetes-702000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19429-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19429-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
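
--no-kubernetes provisions the VM without starting a cluster and, as shown above, is rejected when combined with an explicit --kubernetes-version; the suggested fix plus a valid invocation:

    $ minikube config unset kubernetes-version        # clears a globally pinned version, if set
    $ minikube start -p NoKubernetes-702000 --no-kubernetes --driver=qemu2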

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-702000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-702000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.499083ms)

-- stdout --
	* The control-plane node NoKubernetes-702000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-702000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.28s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.621473791s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.65495875s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.28s)

TestNoKubernetes/serial/Stop (1.87s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-702000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-702000: (1.873721208s)
--- PASS: TestNoKubernetes/serial/Stop (1.87s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-702000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-702000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (37.020291ms)

-- stdout --
	* The control-plane node NoKubernetes-702000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-702000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.71s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-967000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.71s)

TestStartStop/group/old-k8s-version/serial/Stop (3.86s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-971000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-971000 --alsologtostderr -v=3: (3.860665208s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.86s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.1s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-971000 -n old-k8s-version-971000: exit status 7 (34.210209ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-971000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.10s)
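
Addon settings are persisted in the profile, so dashboard can be enabled while the host is stopped (status exits 7 for a stopped host, which the test treats as acceptable) and takes effect on the next start; the sequence, reduced:

    $ minikube status -p old-k8s-version-971000 --format={{.Host}}        # prints Stopped, exit status 7
    $ minikube addons enable dashboard -p old-k8s-version-971000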

TestStartStop/group/no-preload/serial/Stop (2.06s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-216000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-216000 --alsologtostderr -v=3: (2.060551417s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-216000 -n no-preload-216000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-216000 -n no-preload-216000: exit status 7 (54.873542ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-216000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (1.94s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-918000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-918000 --alsologtostderr -v=3: (1.93854s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (1.94s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000: exit status 7 (54.177875ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-918000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.68s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-607000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-607000 --alsologtostderr -v=3: (3.680054125s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.68s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-622000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.27s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-622000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-622000 --alsologtostderr -v=3: (3.267351041s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.27s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-607000 -n default-k8s-diff-port-607000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-607000 -n default-k8s-diff-port-607000: exit status 7 (53.96175ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-607000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-622000 -n newest-cni-622000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-622000 -n newest-cni-622000: exit status 7 (54.828ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-622000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/274)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
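(Note: OS-gated skips like this one use the standard Go testing idiom. A minimal sketch of the gate behind scheduled_stop_test.go:42 — the exact condition in the source is an assumption here:

package test

import (
	"runtime"
	"testing"
)

// Gate: bail out on any OS other than Windows before doing real work.
// The skip message is what appears in the log line above.
func TestScheduledStopWindows(t *testing.T) {
	if runtime.GOOS != "windows" {
		t.Skip("test only runs on windows")
	}
	// ... Windows-only scheduled-stop checks would run here ...
}
)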

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.26s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-986000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-986000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-986000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-986000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-986000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-986000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-986000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-986000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-986000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-986000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-986000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: /etc/hosts:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: /etc/resolv.conf:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-986000

>>> host: crictl pods:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: crictl containers:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> k8s: describe netcat deployment:
error: context "cilium-986000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-986000" does not exist

>>> k8s: netcat logs:
error: context "cilium-986000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-986000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-986000" does not exist

>>> k8s: coredns logs:
error: context "cilium-986000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-986000" does not exist

>>> k8s: api server logs:
error: context "cilium-986000" does not exist

>>> host: /etc/cni:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: ip a s:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: ip r s:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: iptables-save:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: iptables table nat:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-986000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-986000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-986000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-986000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-986000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-986000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-986000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-986000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-986000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-986000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-986000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: kubelet daemon config:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> k8s: kubelet logs:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-986000

>>> host: docker daemon status:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: docker daemon config:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: docker system info:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: cri-docker daemon status:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: cri-docker daemon config:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: cri-dockerd version:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: containerd daemon status:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: containerd daemon config:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: containerd config dump:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: crio daemon status:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: crio daemon config:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: /etc/crio:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

>>> host: crio config:
* Profile "cilium-986000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-986000"

----------------------- debugLogs end: cilium-986000 [took: 2.165408s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-986000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-986000
--- SKIP: TestNetworkPlugins/group/cilium (2.26s)
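(Note: every "context was not found" / context "cilium-986000" does not exist line in the debug dump above is the expected failure mode of querying a kubeconfig context that was never created — the skip fires before "minikube start" ever runs for this profile. The same class of error can be reproduced against any kubeconfig lacking that context with, for example: "kubectl --context cilium-986000 get pods".)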

TestStartStop/group/disable-driver-mounts (0.1s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-555000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-555000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.10s)
