Test Report: QEMU_macOS 19336

86221fe19cf32e1f04d47d4acd0a12df0852414c:2024-07-29:35546

Failed tests (97/282)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 13.27
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 9.99
55 TestCertOptions 10.16
56 TestCertExpiration 195.41
57 TestDockerFlags 10.41
58 TestForceSystemdFlag 10.2
59 TestForceSystemdEnv 10.57
104 TestFunctional/parallel/ServiceCmdConnect 29.79
176 TestMultiControlPlane/serial/StopSecondaryNode 214.11
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 102.85
178 TestMultiControlPlane/serial/RestartSecondaryNode 208.36
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.38
181 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
183 TestMultiControlPlane/serial/StopCluster 202.07
184 TestMultiControlPlane/serial/RestartCluster 5.25
185 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
186 TestMultiControlPlane/serial/AddSecondaryNode 0.07
190 TestImageBuild/serial/Setup 10.22
193 TestJSONOutput/start/Command 9.82
199 TestJSONOutput/pause/Command 0.08
205 TestJSONOutput/unpause/Command 0.04
222 TestMinikubeProfile 10.06
225 TestMountStart/serial/StartWithMountFirst 10.03
228 TestMultiNode/serial/FreshStart2Nodes 10.11
229 TestMultiNode/serial/DeployApp2Nodes 85.36
230 TestMultiNode/serial/PingHostFrom2Pods 0.08
231 TestMultiNode/serial/AddNode 0.07
232 TestMultiNode/serial/MultiNodeLabels 0.06
233 TestMultiNode/serial/ProfileList 0.07
234 TestMultiNode/serial/CopyFile 0.06
235 TestMultiNode/serial/StopNode 0.13
236 TestMultiNode/serial/StartAfterStop 48.18
237 TestMultiNode/serial/RestartKeepsNodes 8.91
238 TestMultiNode/serial/DeleteNode 0.09
239 TestMultiNode/serial/StopMultiNode 3.35
240 TestMultiNode/serial/RestartMultiNode 5.25
241 TestMultiNode/serial/ValidateNameConflict 19.98
245 TestPreload 10.13
247 TestScheduledStopUnix 10.04
248 TestSkaffold 12.13
251 TestRunningBinaryUpgrade 585.08
253 TestKubernetesUpgrade 18.35
266 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.91
267 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.29
269 TestStoppedBinaryUpgrade/Upgrade 564.03
271 TestPause/serial/Start 10
281 TestNoKubernetes/serial/StartWithK8s 9.91
282 TestNoKubernetes/serial/StartWithStopK8s 5.31
283 TestNoKubernetes/serial/Start 5.32
287 TestNoKubernetes/serial/StartNoArgs 5.26
289 TestNetworkPlugins/group/auto/Start 9.84
290 TestNetworkPlugins/group/calico/Start 9.83
291 TestNetworkPlugins/group/custom-flannel/Start 9.94
292 TestNetworkPlugins/group/false/Start 10
293 TestNetworkPlugins/group/kindnet/Start 9.76
294 TestNetworkPlugins/group/flannel/Start 9.86
295 TestNetworkPlugins/group/enable-default-cni/Start 9.83
296 TestNetworkPlugins/group/bridge/Start 9.9
297 TestNetworkPlugins/group/kubenet/Start 9.89
300 TestStartStop/group/old-k8s-version/serial/FirstStart 10.11
301 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
305 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
306 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
308 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
309 TestStartStop/group/old-k8s-version/serial/Pause 0.1
311 TestStartStop/group/no-preload/serial/FirstStart 9.99
312 TestStartStop/group/no-preload/serial/DeployApp 0.09
313 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
316 TestStartStop/group/no-preload/serial/SecondStart 5.26
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.9
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
320 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
322 TestStartStop/group/no-preload/serial/Pause 0.1
324 TestStartStop/group/newest-cni/serial/FirstStart 9.97
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.08
334 TestStartStop/group/newest-cni/serial/SecondStart 6.11
335 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
336 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
337 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
338 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
340 TestStartStop/group/embed-certs/serial/FirstStart 9.99
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
344 TestStartStop/group/newest-cni/serial/Pause 0.1
345 TestStartStop/group/embed-certs/serial/DeployApp 0.09
346 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
349 TestStartStop/group/embed-certs/serial/SecondStart 5.25
350 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
351 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
352 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
353 TestStartStop/group/embed-certs/serial/Pause 0.1
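
Two signatures account for the bulk of this report: the TestDownloadOnly/v1.20.0 failures are a 404 on the darwin/arm64 kubectl download, and the short (~10 s) start failures excerpted below all die connecting to /var/run/socket_vmnet. A quick way to confirm the split, assuming the raw console log has been saved locally as report.txt (a hypothetical filename, not produced by the suite):

	# Start failures caused by the socket_vmnet daemon being unreachable
	grep -c 'Failed to connect to "/var/run/socket_vmnet"' report.txt
	# Download failures on the v1.20.0 darwin/arm64 kubectl artifact
	grep -c 'Failed to cache kubectl' report.txt
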
TestDownloadOnly/v1.20.0/json-events (13.27s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-388000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-388000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (13.270908584s)

-- stdout --
	{"specversion":"1.0","id":"737d439e-d3c4-4e49-b7ec-0b7f44f5aad6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-388000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"83992076-0312-44ee-a562-ccbd07e25248","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19336"}}
	{"specversion":"1.0","id":"f293eec8-14b6-401a-9f66-b1547232c3ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig"}}
	{"specversion":"1.0","id":"a45d402c-a89d-40dd-9ac9-d7a9a827074a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"22b29a6c-57b7-47e3-a74a-f5d8d7c0f889","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5571c03f-2f14-46f0-bd12-9b7c5ccdbc91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube"}}
	{"specversion":"1.0","id":"4497afbe-6883-474c-bc65-2e595217b0ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"0a9ebb16-7a4f-46bc-8a8e-4ecd933c8dbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f0d46ecc-c862-4e9c-bc95-32aa14b5a717","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"0b8b2ed6-e583-4b81-be5b-0f7eeaebeafa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"91a386ac-53a0-42e8-92f2-2d5a53fa8fec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-388000\" primary control-plane node in \"download-only-388000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f9d8962c-b408-4573-bc97-f6dbc6430dd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d6451764-37f5-4c53-b6d0-7d48773e2817","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19336-945/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1085c1a60 0x1085c1a60 0x1085c1a60 0x1085c1a60 0x1085c1a60 0x1085c1a60 0x1085c1a60] Decompressors:map[bz2:0x14000504db0 gz:0x14000504db8 tar:0x14000504d60 tar.bz2:0x14000504d70 tar.gz:0x14000504d80 tar.xz:0x14000504d90 tar.zst:0x14000504da0 tbz2:0x14000504d70 tgz:0x140
00504d80 txz:0x14000504d90 tzst:0x14000504da0 xz:0x14000504dc0 zip:0x14000504dd0 zst:0x14000504dc8] Getters:map[file:0x140018825f0 http:0x140006bc230 https:0x140006bc280] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"c3feb70d-03d4-4862-8baf-006e0976de8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0729 03:34:20.971944    1399 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:34:20.972086    1399 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:34:20.972089    1399 out.go:304] Setting ErrFile to fd 2...
	I0729 03:34:20.972092    1399 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:34:20.972227    1399 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	W0729 03:34:20.972311    1399 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19336-945/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19336-945/.minikube/config/config.json: no such file or directory
	I0729 03:34:20.973542    1399 out.go:298] Setting JSON to true
	I0729 03:34:20.990548    1399 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":223,"bootTime":1722249037,"procs":441,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 03:34:20.990682    1399 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:34:20.996606    1399 out.go:97] [download-only-388000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:34:20.996789    1399 notify.go:220] Checking for updates...
	W0729 03:34:20.996812    1399 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 03:34:21.000282    1399 out.go:169] MINIKUBE_LOCATION=19336
	I0729 03:34:21.003533    1399 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 03:34:21.008522    1399 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:34:21.009975    1399 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:34:21.013456    1399 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	W0729 03:34:21.019470    1399 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 03:34:21.019670    1399 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:34:21.024463    1399 out.go:97] Using the qemu2 driver based on user configuration
	I0729 03:34:21.024482    1399 start.go:297] selected driver: qemu2
	I0729 03:34:21.024495    1399 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:34:21.024565    1399 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:34:21.028451    1399 out.go:169] Automatically selected the socket_vmnet network
	I0729 03:34:21.034132    1399 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 03:34:21.034224    1399 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 03:34:21.034295    1399 cni.go:84] Creating CNI manager for ""
	I0729 03:34:21.034313    1399 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 03:34:21.034359    1399 start.go:340] cluster config:
	{Name:download-only-388000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-388000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:34:21.039586    1399 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:34:21.044459    1399 out.go:97] Downloading VM boot image ...
	I0729 03:34:21.044476    1399 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso
	I0729 03:34:26.586055    1399 out.go:97] Starting "download-only-388000" primary control-plane node in "download-only-388000" cluster
	I0729 03:34:26.586075    1399 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 03:34:26.645831    1399 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 03:34:26.645841    1399 cache.go:56] Caching tarball of preloaded images
	I0729 03:34:26.645999    1399 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 03:34:26.650108    1399 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 03:34:26.650115    1399 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 03:34:26.723389    1399 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 03:34:32.935456    1399 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 03:34:32.935620    1399 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 03:34:33.631853    1399 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 03:34:33.632056    1399 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/download-only-388000/config.json ...
	I0729 03:34:33.632075    1399 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/download-only-388000/config.json: {Name:mk0e53c4345a3115807d78af2fad3c40d51e0602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:34:33.632292    1399 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 03:34:33.632495    1399 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0729 03:34:34.168468    1399 out.go:169] 
	W0729 03:34:34.174307    1399 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19336-945/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1085c1a60 0x1085c1a60 0x1085c1a60 0x1085c1a60 0x1085c1a60 0x1085c1a60 0x1085c1a60] Decompressors:map[bz2:0x14000504db0 gz:0x14000504db8 tar:0x14000504d60 tar.bz2:0x14000504d70 tar.gz:0x14000504d80 tar.xz:0x14000504d90 tar.zst:0x14000504da0 tbz2:0x14000504d70 tgz:0x14000504d80 txz:0x14000504d90 tzst:0x14000504da0 xz:0x14000504dc0 zip:0x14000504dd0 zst:0x14000504dc8] Getters:map[file:0x140018825f0 http:0x140006bc230 https:0x140006bc280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0729 03:34:34.174331    1399 out_reason.go:110] 
	W0729 03:34:34.182376    1399 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:34:34.186382    1399 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-388000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (13.27s)
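
The root cause is the 404 on the kubectl.sha256 checksum file: dl.k8s.io serves no darwin/arm64 kubectl artifact for v1.20.0, so this download can never succeed on an arm64 Mac runner. A minimal host-side check, outside the test suite, using the exact URL from the error above:

	# Print the final HTTP status after following dl.k8s.io's redirect;
	# 404 confirms the v1.20.0 darwin/arm64 artifact is absent upstream.
	curl -sL -o /dev/null -w '%{http_code}\n' \
	  'https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256'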

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19336-945/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19336-945/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (9.99s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-506000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-506000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.834619s)

-- stdout --
	* [offline-docker-506000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-506000" primary control-plane node in "offline-docker-506000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-506000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:12:14.732767    3595 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:12:14.732906    3595 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:12:14.732909    3595 out.go:304] Setting ErrFile to fd 2...
	I0729 04:12:14.732911    3595 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:12:14.733043    3595 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:12:14.734150    3595 out.go:298] Setting JSON to false
	I0729 04:12:14.751421    3595 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2497,"bootTime":1722249037,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:12:14.751494    3595 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:12:14.755614    3595 out.go:177] * [offline-docker-506000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:12:14.763421    3595 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:12:14.763451    3595 notify.go:220] Checking for updates...
	I0729 04:12:14.769360    3595 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:12:14.772445    3595 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:12:14.775388    3595 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:12:14.776570    3595 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:12:14.779365    3595 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:12:14.782773    3595 config.go:182] Loaded profile config "multinode-369000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:12:14.782829    3595 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:12:14.786201    3595 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:12:14.793337    3595 start.go:297] selected driver: qemu2
	I0729 04:12:14.793347    3595 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:12:14.793354    3595 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:12:14.795277    3595 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:12:14.798361    3595 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:12:14.801576    3595 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:12:14.801611    3595 cni.go:84] Creating CNI manager for ""
	I0729 04:12:14.801619    3595 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:12:14.801623    3595 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:12:14.801672    3595 start.go:340] cluster config:
	{Name:offline-docker-506000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-506000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:12:14.805499    3595 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:12:14.808387    3595 out.go:177] * Starting "offline-docker-506000" primary control-plane node in "offline-docker-506000" cluster
	I0729 04:12:14.816314    3595 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:12:14.816336    3595 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:12:14.816348    3595 cache.go:56] Caching tarball of preloaded images
	I0729 04:12:14.816412    3595 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:12:14.816417    3595 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:12:14.816470    3595 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/offline-docker-506000/config.json ...
	I0729 04:12:14.816480    3595 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/offline-docker-506000/config.json: {Name:mk1e6fb9157bf819afbc946ff68f2fbe203ab015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:12:14.816708    3595 start.go:360] acquireMachinesLock for offline-docker-506000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:12:14.816738    3595 start.go:364] duration metric: took 23.083µs to acquireMachinesLock for "offline-docker-506000"
	I0729 04:12:14.816749    3595 start.go:93] Provisioning new machine with config: &{Name:offline-docker-506000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-506000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:12:14.816798    3595 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:12:14.821291    3595 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 04:12:14.836953    3595 start.go:159] libmachine.API.Create for "offline-docker-506000" (driver="qemu2")
	I0729 04:12:14.836980    3595 client.go:168] LocalClient.Create starting
	I0729 04:12:14.837051    3595 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:12:14.837080    3595 main.go:141] libmachine: Decoding PEM data...
	I0729 04:12:14.837089    3595 main.go:141] libmachine: Parsing certificate...
	I0729 04:12:14.837129    3595 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:12:14.837163    3595 main.go:141] libmachine: Decoding PEM data...
	I0729 04:12:14.837170    3595 main.go:141] libmachine: Parsing certificate...
	I0729 04:12:14.837503    3595 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:12:14.989465    3595 main.go:141] libmachine: Creating SSH key...
	I0729 04:12:15.082504    3595 main.go:141] libmachine: Creating Disk image...
	I0729 04:12:15.082511    3595 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:12:15.082678    3595 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/offline-docker-506000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/offline-docker-506000/disk.qcow2
	I0729 04:12:15.094049    3595 main.go:141] libmachine: STDOUT: 
	I0729 04:12:15.094070    3595 main.go:141] libmachine: STDERR: 
	I0729 04:12:15.094118    3595 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/offline-docker-506000/disk.qcow2 +20000M
	I0729 04:12:15.102240    3595 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:12:15.102265    3595 main.go:141] libmachine: STDERR: 
	I0729 04:12:15.102287    3595 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/offline-docker-506000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/offline-docker-506000/disk.qcow2
	I0729 04:12:15.102293    3595 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:12:15.102302    3595 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:12:15.102336    3595 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/offline-docker-506000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/offline-docker-506000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/offline-docker-506000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:fd:1e:ab:e0:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/offline-docker-506000/disk.qcow2
	I0729 04:12:15.104095    3595 main.go:141] libmachine: STDOUT: 
	I0729 04:12:15.104111    3595 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:12:15.104132    3595 client.go:171] duration metric: took 267.155792ms to LocalClient.Create
	I0729 04:12:17.106186    3595 start.go:128] duration metric: took 2.289436s to createHost
	I0729 04:12:17.106206    3595 start.go:83] releasing machines lock for "offline-docker-506000", held for 2.289539084s
	W0729 04:12:17.106221    3595 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:12:17.115209    3595 out.go:177] * Deleting "offline-docker-506000" in qemu2 ...
	W0729 04:12:17.132180    3595 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:12:17.132189    3595 start.go:729] Will try again in 5 seconds ...
	I0729 04:12:22.134114    3595 start.go:360] acquireMachinesLock for offline-docker-506000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:12:22.134217    3595 start.go:364] duration metric: took 81.916µs to acquireMachinesLock for "offline-docker-506000"
	I0729 04:12:22.134240    3595 start.go:93] Provisioning new machine with config: &{Name:offline-docker-506000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-506000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:12:22.134285    3595 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:12:22.146146    3595 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 04:12:22.163501    3595 start.go:159] libmachine.API.Create for "offline-docker-506000" (driver="qemu2")
	I0729 04:12:22.163529    3595 client.go:168] LocalClient.Create starting
	I0729 04:12:22.163585    3595 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:12:22.163614    3595 main.go:141] libmachine: Decoding PEM data...
	I0729 04:12:22.163621    3595 main.go:141] libmachine: Parsing certificate...
	I0729 04:12:22.163652    3595 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:12:22.163674    3595 main.go:141] libmachine: Decoding PEM data...
	I0729 04:12:22.163679    3595 main.go:141] libmachine: Parsing certificate...
	I0729 04:12:22.163952    3595 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:12:22.316874    3595 main.go:141] libmachine: Creating SSH key...
	I0729 04:12:22.473852    3595 main.go:141] libmachine: Creating Disk image...
	I0729 04:12:22.473861    3595 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:12:22.474040    3595 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/offline-docker-506000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/offline-docker-506000/disk.qcow2
	I0729 04:12:22.483176    3595 main.go:141] libmachine: STDOUT: 
	I0729 04:12:22.483194    3595 main.go:141] libmachine: STDERR: 
	I0729 04:12:22.483239    3595 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/offline-docker-506000/disk.qcow2 +20000M
	I0729 04:12:22.491002    3595 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:12:22.491016    3595 main.go:141] libmachine: STDERR: 
	I0729 04:12:22.491029    3595 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/offline-docker-506000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/offline-docker-506000/disk.qcow2
	I0729 04:12:22.491033    3595 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:12:22.491045    3595 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:12:22.491080    3595 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/offline-docker-506000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/offline-docker-506000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/offline-docker-506000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:10:0d:fe:a2:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/offline-docker-506000/disk.qcow2
	I0729 04:12:22.492661    3595 main.go:141] libmachine: STDOUT: 
	I0729 04:12:22.492684    3595 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:12:22.492707    3595 client.go:171] duration metric: took 329.185292ms to LocalClient.Create
	I0729 04:12:24.494826    3595 start.go:128] duration metric: took 2.360594417s to createHost
	I0729 04:12:24.494893    3595 start.go:83] releasing machines lock for "offline-docker-506000", held for 2.360740708s
	W0729 04:12:24.495290    3595 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-506000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-506000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:12:24.509877    3595 out.go:177] 
	W0729 04:12:24.513999    3595 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:12:24.514023    3595 out.go:239] * 
	* 
	W0729 04:12:24.516904    3595 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:12:24.525847    3595 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-506000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-29 04:12:24.540612 -0700 PDT m=+2283.780891959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-506000 -n offline-docker-506000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-506000 -n offline-docker-506000: exit status 7 (68.294958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-506000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-506000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-506000
--- FAIL: TestOffline (9.99s)
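
Both start attempts here fail at the same point: the qemu2 driver launches QEMU through socket_vmnet_client, and the client cannot reach the daemon's socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so no VM ever boots; the same signature recurs in the start failures below. A hedged host-side check, assuming the daemon binary sits alongside the socket_vmnet_client path shown in the cluster config (the gateway address is the upstream README's example, not taken from this log):

	# The client dials this unix socket; a missing or stale socket file
	# matches the "Connection refused" in the log above.
	ls -l /var/run/socket_vmnet
	# Run the daemon in the foreground to debug (root is required for vmnet).
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet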

TestCertOptions (10.16s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-582000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-582000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.905033s)

-- stdout --
	* [cert-options-582000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-582000" primary control-plane node in "cert-options-582000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-582000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-582000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-582000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-582000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-582000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (79.324625ms)

-- stdout --
	* The control-plane node cert-options-582000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-582000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-582000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-582000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-582000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-582000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (39.253333ms)

-- stdout --
	* The control-plane node cert-options-582000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-582000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-582000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port. 
-- stdout --
	* The control-plane node cert-options-582000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-582000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-29 04:12:55.729919 -0700 PDT m=+2314.971208584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-582000 -n cert-options-582000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-582000 -n cert-options-582000: exit status 7 (29.338292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-582000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-582000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-582000
--- FAIL: TestCertOptions (10.16s)
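Note: every start attempt above dies at the same point, `Failed to connect to "/var/run/socket_vmnet": Connection refused`. The VM never boots, so the SAN and port assertions run against a stopped profile; this is a host-environment failure, not evidence of a certificate-handling regression. A minimal diagnostic sketch for the host-side daemon, assuming the source-install layout under /opt/socket_vmnet that the logs show (install paths and flags vary by setup, so treat this as a sketch rather than the canonical procedure):

    # does the unix socket exist, and is anything serving it?
    ls -l /var/run/socket_vmnet
    ps aux | grep '[s]ocket_vmnet'
    # if no daemon is running, start one by hand (gateway address is an assumption)
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet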

TestCertExpiration (195.41s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration


=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-099000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-099000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.117898291s)

-- stdout --
	* [cert-expiration-099000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-099000" primary control-plane node in "cert-expiration-099000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-099000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-099000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-099000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-099000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-099000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.180870041s)

-- stdout --
	* [cert-expiration-099000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-099000" primary control-plane node in "cert-expiration-099000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-099000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-099000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-099000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-099000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-099000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-099000" primary control-plane node in "cert-expiration-099000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-099000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-099000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-099000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-29 04:15:55.768721 -0700 PDT m=+2495.015842209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-099000 -n cert-expiration-099000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-099000 -n cert-expiration-099000: exit status 7 (29.105834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-099000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-099000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-099000
--- FAIL: TestCertExpiration (195.41s)
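For reference, the scenario this test drives can be replayed by hand once socket_vmnet is healthy. The two start invocations below are copied from the test log above; the sleep is an assumption standing in for the wait the test performs between them (the --cert-expiration=3m TTL and the ~195s test duration suggest it waits for the short-lived certificates to lapse):

    out/minikube-darwin-arm64 start -p cert-expiration-099000 --memory=2048 --cert-expiration=3m --driver=qemu2
    sleep 180    # let the 3-minute certificates expire (duration is an assumption)
    # the second start is expected to renew the certificates and warn that they had expired
    out/minikube-darwin-arm64 start -p cert-expiration-099000 --memory=2048 --cert-expiration=8760h --driver=qemu2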

TestDockerFlags (10.41s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags


=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-470000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-470000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.173171s)

-- stdout --
	* [docker-flags-470000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-470000" primary control-plane node in "docker-flags-470000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-470000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:12:35.292840    3788 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:12:35.292985    3788 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:12:35.292989    3788 out.go:304] Setting ErrFile to fd 2...
	I0729 04:12:35.292991    3788 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:12:35.293125    3788 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:12:35.294196    3788 out.go:298] Setting JSON to false
	I0729 04:12:35.310381    3788 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2518,"bootTime":1722249037,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:12:35.310446    3788 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:12:35.314598    3788 out.go:177] * [docker-flags-470000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:12:35.323590    3788 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:12:35.323622    3788 notify.go:220] Checking for updates...
	I0729 04:12:35.329587    3788 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:12:35.331139    3788 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:12:35.334572    3788 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:12:35.337553    3788 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:12:35.340589    3788 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:12:35.343982    3788 config.go:182] Loaded profile config "force-systemd-flag-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:12:35.344052    3788 config.go:182] Loaded profile config "multinode-369000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:12:35.344106    3788 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:12:35.348543    3788 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:12:35.355578    3788 start.go:297] selected driver: qemu2
	I0729 04:12:35.355586    3788 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:12:35.355593    3788 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:12:35.357989    3788 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:12:35.361521    3788 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:12:35.364695    3788 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0729 04:12:35.364740    3788 cni.go:84] Creating CNI manager for ""
	I0729 04:12:35.364748    3788 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:12:35.364752    3788 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:12:35.364780    3788 start.go:340] cluster config:
	{Name:docker-flags-470000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-470000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:12:35.368609    3788 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:12:35.376545    3788 out.go:177] * Starting "docker-flags-470000" primary control-plane node in "docker-flags-470000" cluster
	I0729 04:12:35.379503    3788 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:12:35.379517    3788 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:12:35.379526    3788 cache.go:56] Caching tarball of preloaded images
	I0729 04:12:35.379585    3788 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:12:35.379593    3788 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:12:35.379658    3788 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/docker-flags-470000/config.json ...
	I0729 04:12:35.379671    3788 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/docker-flags-470000/config.json: {Name:mke586573298c3501db0607221af304be4ef580c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:12:35.379892    3788 start.go:360] acquireMachinesLock for docker-flags-470000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:12:35.379927    3788 start.go:364] duration metric: took 29.208µs to acquireMachinesLock for "docker-flags-470000"
	I0729 04:12:35.379939    3788 start.go:93] Provisioning new machine with config: &{Name:docker-flags-470000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-470000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:12:35.379968    3788 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:12:35.384572    3788 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 04:12:35.402495    3788 start.go:159] libmachine.API.Create for "docker-flags-470000" (driver="qemu2")
	I0729 04:12:35.402522    3788 client.go:168] LocalClient.Create starting
	I0729 04:12:35.402585    3788 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:12:35.402615    3788 main.go:141] libmachine: Decoding PEM data...
	I0729 04:12:35.402623    3788 main.go:141] libmachine: Parsing certificate...
	I0729 04:12:35.402666    3788 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:12:35.402695    3788 main.go:141] libmachine: Decoding PEM data...
	I0729 04:12:35.402701    3788 main.go:141] libmachine: Parsing certificate...
	I0729 04:12:35.403048    3788 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:12:35.555556    3788 main.go:141] libmachine: Creating SSH key...
	I0729 04:12:35.649722    3788 main.go:141] libmachine: Creating Disk image...
	I0729 04:12:35.649731    3788 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:12:35.649931    3788 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/docker-flags-470000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/docker-flags-470000/disk.qcow2
	I0729 04:12:35.658964    3788 main.go:141] libmachine: STDOUT: 
	I0729 04:12:35.658978    3788 main.go:141] libmachine: STDERR: 
	I0729 04:12:35.659028    3788 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/docker-flags-470000/disk.qcow2 +20000M
	I0729 04:12:35.666737    3788 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:12:35.666751    3788 main.go:141] libmachine: STDERR: 
	I0729 04:12:35.666764    3788 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/docker-flags-470000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/docker-flags-470000/disk.qcow2
	I0729 04:12:35.666768    3788 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:12:35.666782    3788 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:12:35.666809    3788 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/docker-flags-470000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/docker-flags-470000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/docker-flags-470000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:9b:73:3b:b2:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/docker-flags-470000/disk.qcow2
	I0729 04:12:35.668459    3788 main.go:141] libmachine: STDOUT: 
	I0729 04:12:35.668474    3788 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:12:35.668490    3788 client.go:171] duration metric: took 265.971459ms to LocalClient.Create
	I0729 04:12:37.670598    3788 start.go:128] duration metric: took 2.290687708s to createHost
	I0729 04:12:37.670761    3788 start.go:83] releasing machines lock for "docker-flags-470000", held for 2.290802375s
	W0729 04:12:37.670818    3788 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:12:37.688195    3788 out.go:177] * Deleting "docker-flags-470000" in qemu2 ...
	W0729 04:12:37.714700    3788 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:12:37.714729    3788 start.go:729] Will try again in 5 seconds ...
	I0729 04:12:42.716804    3788 start.go:360] acquireMachinesLock for docker-flags-470000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:12:42.954028    3788 start.go:364] duration metric: took 237.088542ms to acquireMachinesLock for "docker-flags-470000"
	I0729 04:12:42.954153    3788 start.go:93] Provisioning new machine with config: &{Name:docker-flags-470000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-470000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:12:42.954449    3788 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:12:42.970148    3788 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 04:12:43.021308    3788 start.go:159] libmachine.API.Create for "docker-flags-470000" (driver="qemu2")
	I0729 04:12:43.021370    3788 client.go:168] LocalClient.Create starting
	I0729 04:12:43.021509    3788 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:12:43.021565    3788 main.go:141] libmachine: Decoding PEM data...
	I0729 04:12:43.021586    3788 main.go:141] libmachine: Parsing certificate...
	I0729 04:12:43.021657    3788 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:12:43.021700    3788 main.go:141] libmachine: Decoding PEM data...
	I0729 04:12:43.021712    3788 main.go:141] libmachine: Parsing certificate...
	I0729 04:12:43.022375    3788 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:12:43.184007    3788 main.go:141] libmachine: Creating SSH key...
	I0729 04:12:43.361590    3788 main.go:141] libmachine: Creating Disk image...
	I0729 04:12:43.361598    3788 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:12:43.361799    3788 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/docker-flags-470000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/docker-flags-470000/disk.qcow2
	I0729 04:12:43.371143    3788 main.go:141] libmachine: STDOUT: 
	I0729 04:12:43.371163    3788 main.go:141] libmachine: STDERR: 
	I0729 04:12:43.371212    3788 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/docker-flags-470000/disk.qcow2 +20000M
	I0729 04:12:43.379026    3788 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:12:43.379038    3788 main.go:141] libmachine: STDERR: 
	I0729 04:12:43.379049    3788 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/docker-flags-470000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/docker-flags-470000/disk.qcow2
	I0729 04:12:43.379054    3788 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:12:43.379067    3788 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:12:43.379105    3788 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/docker-flags-470000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/docker-flags-470000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/docker-flags-470000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:4a:87:e1:c7:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/docker-flags-470000/disk.qcow2
	I0729 04:12:43.380677    3788 main.go:141] libmachine: STDOUT: 
	I0729 04:12:43.380693    3788 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:12:43.380703    3788 client.go:171] duration metric: took 359.340958ms to LocalClient.Create
	I0729 04:12:45.382814    3788 start.go:128] duration metric: took 2.428412667s to createHost
	I0729 04:12:45.382876    3788 start.go:83] releasing machines lock for "docker-flags-470000", held for 2.428896167s
	W0729 04:12:45.383264    3788 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-470000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-470000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:12:45.402776    3788 out.go:177] 
	W0729 04:12:45.407790    3788 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:12:45.407815    3788 out.go:239] * 
	* 
	W0729 04:12:45.410724    3788 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:12:45.424683    3788 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-470000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
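Aside: the debug log above shows the full exec chain that fails. libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to /var/run/socket_vmnet before handing the resulting socket to QEMU (-netdev socket,id=net0,fd=3). That connect step can be exercised in isolation; a sketch, using /bin/true as a stand-in child command:

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /bin/true
    # with no daemon listening, this should reproduce the same error:
    #   Failed to connect to "/var/run/socket_vmnet": Connection refused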
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-470000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-470000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (79.014167ms)

-- stdout --
	* The control-plane node docker-flags-470000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-470000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-470000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-470000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-470000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-470000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-470000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-470000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-470000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.5425ms)

-- stdout --
	* The control-plane node docker-flags-470000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-470000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-470000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-470000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-470000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-470000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-29 04:12:45.566469 -0700 PDT m=+2304.807429793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-470000 -n docker-flags-470000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-470000 -n docker-flags-470000: exit status 7 (29.125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-470000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-470000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-470000
--- FAIL: TestDockerFlags (10.41s)
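For reference, the assertions this test makes can be checked by hand on a healthy profile. The ssh invocations mirror the ones in the log above, and the expected substrings come straight from the --docker-env and --docker-opt flags passed to start:

    out/minikube-darwin-arm64 -p docker-flags-470000 ssh "sudo systemctl show docker --property=Environment --no-pager"
    # expect the Environment= line to contain FOO=BAR and BAZ=BAT
    out/minikube-darwin-arm64 -p docker-flags-470000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
    # expect the dockerd ExecStart line to contain --debug and --icc=true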

TestForceSystemdFlag (10.2s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag


=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-475000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-475000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.015950583s)

-- stdout --
	* [force-systemd-flag-475000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-475000" primary control-plane node in "force-systemd-flag-475000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-475000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:12:30.295772    3765 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:12:30.295907    3765 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:12:30.295910    3765 out.go:304] Setting ErrFile to fd 2...
	I0729 04:12:30.295913    3765 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:12:30.296050    3765 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:12:30.297099    3765 out.go:298] Setting JSON to false
	I0729 04:12:30.312974    3765 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2513,"bootTime":1722249037,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:12:30.313037    3765 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:12:30.319076    3765 out.go:177] * [force-systemd-flag-475000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:12:30.326041    3765 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:12:30.326087    3765 notify.go:220] Checking for updates...
	I0729 04:12:30.334052    3765 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:12:30.337002    3765 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:12:30.340018    3765 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:12:30.343031    3765 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:12:30.344397    3765 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:12:30.347281    3765 config.go:182] Loaded profile config "force-systemd-env-799000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:12:30.347351    3765 config.go:182] Loaded profile config "multinode-369000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:12:30.347393    3765 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:12:30.352023    3765 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:12:30.357004    3765 start.go:297] selected driver: qemu2
	I0729 04:12:30.357011    3765 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:12:30.357018    3765 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:12:30.359206    3765 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:12:30.362988    3765 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:12:30.366114    3765 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 04:12:30.366153    3765 cni.go:84] Creating CNI manager for ""
	I0729 04:12:30.366161    3765 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:12:30.366169    3765 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:12:30.366200    3765 start.go:340] cluster config:
	{Name:force-systemd-flag-475000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-475000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:12:30.369842    3765 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:12:30.377008    3765 out.go:177] * Starting "force-systemd-flag-475000" primary control-plane node in "force-systemd-flag-475000" cluster
	I0729 04:12:30.380981    3765 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:12:30.380996    3765 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:12:30.381005    3765 cache.go:56] Caching tarball of preloaded images
	I0729 04:12:30.381070    3765 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:12:30.381076    3765 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:12:30.381125    3765 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/force-systemd-flag-475000/config.json ...
	I0729 04:12:30.381136    3765 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/force-systemd-flag-475000/config.json: {Name:mkde58ceae54079248f62c43caeeca3336d19d35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:12:30.381360    3765 start.go:360] acquireMachinesLock for force-systemd-flag-475000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:12:30.381399    3765 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "force-systemd-flag-475000"
	I0729 04:12:30.381413    3765 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-475000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:12:30.381438    3765 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:12:30.389974    3765 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 04:12:30.407955    3765 start.go:159] libmachine.API.Create for "force-systemd-flag-475000" (driver="qemu2")
	I0729 04:12:30.407984    3765 client.go:168] LocalClient.Create starting
	I0729 04:12:30.408050    3765 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:12:30.408082    3765 main.go:141] libmachine: Decoding PEM data...
	I0729 04:12:30.408090    3765 main.go:141] libmachine: Parsing certificate...
	I0729 04:12:30.408130    3765 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:12:30.408154    3765 main.go:141] libmachine: Decoding PEM data...
	I0729 04:12:30.408163    3765 main.go:141] libmachine: Parsing certificate...
	I0729 04:12:30.408599    3765 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:12:30.561467    3765 main.go:141] libmachine: Creating SSH key...
	I0729 04:12:30.686211    3765 main.go:141] libmachine: Creating Disk image...
	I0729 04:12:30.686218    3765 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:12:30.686405    3765 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-flag-475000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-flag-475000/disk.qcow2
	I0729 04:12:30.695553    3765 main.go:141] libmachine: STDOUT: 
	I0729 04:12:30.695571    3765 main.go:141] libmachine: STDERR: 
	I0729 04:12:30.695622    3765 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-flag-475000/disk.qcow2 +20000M
	I0729 04:12:30.703457    3765 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:12:30.703472    3765 main.go:141] libmachine: STDERR: 
	I0729 04:12:30.703496    3765 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-flag-475000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-flag-475000/disk.qcow2
	I0729 04:12:30.703501    3765 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:12:30.703514    3765 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:12:30.703543    3765 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-flag-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-flag-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-flag-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:77:20:24:04:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-flag-475000/disk.qcow2
	I0729 04:12:30.705122    3765 main.go:141] libmachine: STDOUT: 
	I0729 04:12:30.705136    3765 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:12:30.705165    3765 client.go:171] duration metric: took 297.177208ms to LocalClient.Create
	I0729 04:12:32.707306    3765 start.go:128] duration metric: took 2.325925416s to createHost
	I0729 04:12:32.707357    3765 start.go:83] releasing machines lock for "force-systemd-flag-475000", held for 2.326023s
	W0729 04:12:32.707415    3765 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:12:32.735385    3765 out.go:177] * Deleting "force-systemd-flag-475000" in qemu2 ...
	W0729 04:12:32.757310    3765 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:12:32.757329    3765 start.go:729] Will try again in 5 seconds ...
	I0729 04:12:37.759318    3765 start.go:360] acquireMachinesLock for force-systemd-flag-475000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:12:37.759747    3765 start.go:364] duration metric: took 288.833µs to acquireMachinesLock for "force-systemd-flag-475000"
	I0729 04:12:37.759935    3765 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-475000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:12:37.760172    3765 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:12:37.770081    3765 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 04:12:37.821544    3765 start.go:159] libmachine.API.Create for "force-systemd-flag-475000" (driver="qemu2")
	I0729 04:12:37.821597    3765 client.go:168] LocalClient.Create starting
	I0729 04:12:37.821710    3765 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:12:37.821780    3765 main.go:141] libmachine: Decoding PEM data...
	I0729 04:12:37.821801    3765 main.go:141] libmachine: Parsing certificate...
	I0729 04:12:37.821874    3765 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:12:37.821926    3765 main.go:141] libmachine: Decoding PEM data...
	I0729 04:12:37.821939    3765 main.go:141] libmachine: Parsing certificate...
	I0729 04:12:37.822502    3765 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:12:37.995275    3765 main.go:141] libmachine: Creating SSH key...
	I0729 04:12:38.218478    3765 main.go:141] libmachine: Creating Disk image...
	I0729 04:12:38.218486    3765 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:12:38.218719    3765 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-flag-475000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-flag-475000/disk.qcow2
	I0729 04:12:38.228247    3765 main.go:141] libmachine: STDOUT: 
	I0729 04:12:38.228264    3765 main.go:141] libmachine: STDERR: 
	I0729 04:12:38.228324    3765 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-flag-475000/disk.qcow2 +20000M
	I0729 04:12:38.236325    3765 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:12:38.236340    3765 main.go:141] libmachine: STDERR: 
	I0729 04:12:38.236361    3765 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-flag-475000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-flag-475000/disk.qcow2
	I0729 04:12:38.236366    3765 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:12:38.236374    3765 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:12:38.236400    3765 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-flag-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-flag-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-flag-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:50:51:e4:7f:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-flag-475000/disk.qcow2
	I0729 04:12:38.238051    3765 main.go:141] libmachine: STDOUT: 
	I0729 04:12:38.238065    3765 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:12:38.238076    3765 client.go:171] duration metric: took 416.486083ms to LocalClient.Create
	I0729 04:12:40.240180    3765 start.go:128] duration metric: took 2.480066708s to createHost
	I0729 04:12:40.240229    3765 start.go:83] releasing machines lock for "force-systemd-flag-475000", held for 2.480521292s
	W0729 04:12:40.240596    3765 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-475000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:12:40.251150    3765 out.go:177] 
	W0729 04:12:40.257185    3765 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:12:40.257210    3765 out.go:239] * 
	W0729 04:12:40.259875    3765 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:12:40.270121    3765 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-475000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-475000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-475000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (73.287875ms)

-- stdout --
	* The control-plane node force-systemd-flag-475000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-475000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-475000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-29 04:12:40.363123 -0700 PDT m=+2299.603915084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-475000 -n force-systemd-flag-475000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-475000 -n force-systemd-flag-475000: exit status 7 (32.86525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-475000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-475000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-475000
--- FAIL: TestForceSystemdFlag (10.20s)
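This failure (and the TestForceSystemdEnv failure that follows) traces back to a single symptom: nothing is listening on /var/run/socket_vmnet, so every QEMU launch through socket_vmnet_client is refused before the VM can boot. A minimal diagnostic sketch, not part of the test suite, that reproduces the probe directly in Go (the socket path is the SocketVMnetPath from the cluster config above):

```go
// socketprobe.go - a hedged diagnostic sketch, not minikube code.
// Dial the unix socket that socket_vmnet_client hands to qemu; on this
// host it should fail with "connect: connection refused", pointing at a
// stopped socket_vmnet daemon rather than a qemu or libmachine bug.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("probe failed:", err) // expected on this host
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
```

If this probe is refused on the CI host, restarting the socket_vmnet daemon there should clear this whole class of failures.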

                                                
                                    
TestForceSystemdEnv (10.57s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-799000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-799000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.387306083s)

-- stdout --
	* [force-systemd-env-799000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-799000" primary control-plane node in "force-systemd-env-799000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-799000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0729 04:12:24.721180    3730 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:12:24.721298    3730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:12:24.721302    3730 out.go:304] Setting ErrFile to fd 2...
	I0729 04:12:24.721305    3730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:12:24.721440    3730 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:12:24.722481    3730 out.go:298] Setting JSON to false
	I0729 04:12:24.738759    3730 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2507,"bootTime":1722249037,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:12:24.738830    3730 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:12:24.745014    3730 out.go:177] * [force-systemd-env-799000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:12:24.751998    3730 notify.go:220] Checking for updates...
	I0729 04:12:24.756998    3730 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:12:24.764934    3730 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:12:24.772830    3730 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:12:24.780972    3730 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:12:24.787920    3730 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:12:24.794892    3730 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0729 04:12:24.799368    3730 config.go:182] Loaded profile config "multinode-369000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:12:24.799416    3730 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:12:24.811955    3730 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:12:24.819000    3730 start.go:297] selected driver: qemu2
	I0729 04:12:24.819005    3730 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:12:24.819011    3730 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:12:24.821217    3730 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:12:24.824976    3730 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:12:24.829063    3730 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 04:12:24.829103    3730 cni.go:84] Creating CNI manager for ""
	I0729 04:12:24.829111    3730 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:12:24.829116    3730 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:12:24.829146    3730 start.go:340] cluster config:
	{Name:force-systemd-env-799000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-799000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:12:24.832695    3730 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:12:24.839961    3730 out.go:177] * Starting "force-systemd-env-799000" primary control-plane node in "force-systemd-env-799000" cluster
	I0729 04:12:24.843958    3730 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:12:24.843977    3730 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:12:24.843985    3730 cache.go:56] Caching tarball of preloaded images
	I0729 04:12:24.844043    3730 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:12:24.844049    3730 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:12:24.844109    3730 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/force-systemd-env-799000/config.json ...
	I0729 04:12:24.844120    3730 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/force-systemd-env-799000/config.json: {Name:mk3d0d482b40e414ed6e5510af2c80cc87a51952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:12:24.844326    3730 start.go:360] acquireMachinesLock for force-systemd-env-799000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:12:24.844361    3730 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "force-systemd-env-799000"
	I0729 04:12:24.844373    3730 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-799000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-799000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:12:24.844405    3730 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:12:24.851976    3730 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 04:12:24.869113    3730 start.go:159] libmachine.API.Create for "force-systemd-env-799000" (driver="qemu2")
	I0729 04:12:24.869140    3730 client.go:168] LocalClient.Create starting
	I0729 04:12:24.869203    3730 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:12:24.869235    3730 main.go:141] libmachine: Decoding PEM data...
	I0729 04:12:24.869245    3730 main.go:141] libmachine: Parsing certificate...
	I0729 04:12:24.869283    3730 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:12:24.869306    3730 main.go:141] libmachine: Decoding PEM data...
	I0729 04:12:24.869314    3730 main.go:141] libmachine: Parsing certificate...
	I0729 04:12:24.869683    3730 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:12:25.022739    3730 main.go:141] libmachine: Creating SSH key...
	I0729 04:12:25.077134    3730 main.go:141] libmachine: Creating Disk image...
	I0729 04:12:25.077139    3730 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:12:25.077331    3730 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-env-799000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-env-799000/disk.qcow2
	I0729 04:12:25.086691    3730 main.go:141] libmachine: STDOUT: 
	I0729 04:12:25.086704    3730 main.go:141] libmachine: STDERR: 
	I0729 04:12:25.086753    3730 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-env-799000/disk.qcow2 +20000M
	I0729 04:12:25.095034    3730 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:12:25.095048    3730 main.go:141] libmachine: STDERR: 
	I0729 04:12:25.095064    3730 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-env-799000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-env-799000/disk.qcow2
	I0729 04:12:25.095067    3730 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:12:25.095080    3730 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:12:25.095106    3730 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-env-799000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-env-799000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-env-799000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:b9:40:cf:2c:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-env-799000/disk.qcow2
	I0729 04:12:25.096781    3730 main.go:141] libmachine: STDOUT: 
	I0729 04:12:25.096794    3730 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:12:25.096815    3730 client.go:171] duration metric: took 227.676041ms to LocalClient.Create
	I0729 04:12:27.098816    3730 start.go:128] duration metric: took 2.254478291s to createHost
	I0729 04:12:27.098840    3730 start.go:83] releasing machines lock for "force-systemd-env-799000", held for 2.254547375s
	W0729 04:12:27.098855    3730 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:12:27.108870    3730 out.go:177] * Deleting "force-systemd-env-799000" in qemu2 ...
	W0729 04:12:27.118393    3730 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:12:27.118403    3730 start.go:729] Will try again in 5 seconds ...
	I0729 04:12:32.120439    3730 start.go:360] acquireMachinesLock for force-systemd-env-799000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:12:32.707515    3730 start.go:364] duration metric: took 586.992041ms to acquireMachinesLock for "force-systemd-env-799000"
	I0729 04:12:32.707644    3730 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-799000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-799000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:12:32.707886    3730 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:12:32.723486    3730 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 04:12:32.772241    3730 start.go:159] libmachine.API.Create for "force-systemd-env-799000" (driver="qemu2")
	I0729 04:12:32.772284    3730 client.go:168] LocalClient.Create starting
	I0729 04:12:32.772406    3730 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:12:32.772468    3730 main.go:141] libmachine: Decoding PEM data...
	I0729 04:12:32.772490    3730 main.go:141] libmachine: Parsing certificate...
	I0729 04:12:32.772560    3730 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:12:32.772613    3730 main.go:141] libmachine: Decoding PEM data...
	I0729 04:12:32.772626    3730 main.go:141] libmachine: Parsing certificate...
	I0729 04:12:32.773199    3730 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:12:32.935519    3730 main.go:141] libmachine: Creating SSH key...
	I0729 04:12:33.014539    3730 main.go:141] libmachine: Creating Disk image...
	I0729 04:12:33.014550    3730 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:12:33.014719    3730 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-env-799000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-env-799000/disk.qcow2
	I0729 04:12:33.028353    3730 main.go:141] libmachine: STDOUT: 
	I0729 04:12:33.028371    3730 main.go:141] libmachine: STDERR: 
	I0729 04:12:33.028423    3730 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-env-799000/disk.qcow2 +20000M
	I0729 04:12:33.036154    3730 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:12:33.036166    3730 main.go:141] libmachine: STDERR: 
	I0729 04:12:33.036181    3730 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-env-799000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-env-799000/disk.qcow2
	I0729 04:12:33.036185    3730 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:12:33.036203    3730 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:12:33.036235    3730 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-env-799000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-env-799000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-env-799000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:3c:e3:28:07:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/force-systemd-env-799000/disk.qcow2
	I0729 04:12:33.037835    3730 main.go:141] libmachine: STDOUT: 
	I0729 04:12:33.037852    3730 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:12:33.037863    3730 client.go:171] duration metric: took 265.583292ms to LocalClient.Create
	I0729 04:12:35.039994    3730 start.go:128] duration metric: took 2.332145292s to createHost
	I0729 04:12:35.040068    3730 start.go:83] releasing machines lock for "force-systemd-env-799000", held for 2.332589167s
	W0729 04:12:35.040484    3730 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-799000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:12:35.049183    3730 out.go:177] 
	W0729 04:12:35.054150    3730 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:12:35.054200    3730 out.go:239] * 
	W0729 04:12:35.056661    3730 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:12:35.066055    3730 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-799000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-799000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-799000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (75.3635ms)

-- stdout --
	* The control-plane node force-systemd-env-799000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-799000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-799000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-29 04:12:35.15758 -0700 PDT m=+2294.398202918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-799000 -n force-systemd-env-799000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-799000 -n force-systemd-env-799000: exit status 7 (33.526292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-799000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-799000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-799000
--- FAIL: TestForceSystemdEnv (10.57s)
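For reference, the assertion neither force-systemd test ever reached is the `docker info --format {{.CgroupDriver}}` query shown in the failed ssh command above. A hedged sketch of the same check against any reachable Docker daemon (assumes a local docker CLI; the real test runs the query inside the VM over `minikube ssh`):

```go
// cgroupdriver.go - a sketch of the check docker_test.go performs once a
// VM is up: ask Docker which cgroup driver it uses. The force-systemd
// tests expect the answer "systemd".
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("cgroup driver:", strings.TrimSpace(string(out)))
}
```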

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (29.79s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-727000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-727000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-7nk7c" [e06c31ad-22bb-4ded-a004-7b082ecec62a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-7nk7c" [e06c31ad-22bb-4ded-a004-7b082ecec62a] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003700958s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.105.4:31390
functional_test.go:1657: error fetching http://192.168.105.4:31390: Get "http://192.168.105.4:31390": dial tcp 192.168.105.4:31390: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31390: Get "http://192.168.105.4:31390": dial tcp 192.168.105.4:31390: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31390: Get "http://192.168.105.4:31390": dial tcp 192.168.105.4:31390: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31390: Get "http://192.168.105.4:31390": dial tcp 192.168.105.4:31390: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31390: Get "http://192.168.105.4:31390": dial tcp 192.168.105.4:31390: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31390: Get "http://192.168.105.4:31390": dial tcp 192.168.105.4:31390: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31390: Get "http://192.168.105.4:31390": dial tcp 192.168.105.4:31390: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.105.4:31390: Get "http://192.168.105.4:31390": dial tcp 192.168.105.4:31390: connect: connection refused
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-727000 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-6f49f58cd5-7nk7c
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-727000/192.168.105.4
Start Time:       Mon, 29 Jul 2024 03:45:26 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=6f49f58cd5
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-6f49f58cd5
Containers:
  echoserver-arm:
    Container ID:   docker://1c07a052be3f8df284ff27b1613ba8cf473fa443d870713b9c17a2969eb0427b
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 29 Jul 2024 03:45:46 -0700
      Finished:     Mon, 29 Jul 2024 03:45:46 -0700
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 29 Jul 2024 03:45:30 -0700
      Finished:     Mon, 29 Jul 2024 03:45:30 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gp8sh (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-gp8sh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  28s               default-scheduler  Successfully assigned default/hello-node-connect-6f49f58cd5-7nk7c to functional-727000
  Normal   Pulling    28s               kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     25s               kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 3.039s (3.039s including waiting). Image size: 84957542 bytes.
  Normal   Created    9s (x3 over 25s)  kubelet            Created container echoserver-arm
  Normal   Started    9s (x3 over 25s)  kubelet            Started container echoserver-arm
  Normal   Pulled     9s (x2 over 25s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    9s (x3 over 24s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-6f49f58cd5-7nk7c_default(e06c31ad-22bb-4ded-a004-7b082ecec62a)

functional_test.go:1604: (dbg) Run:  kubectl --context functional-727000 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
exec /usr/sbin/nginx: exec format error
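The `exec format error` above means the kernel rejected the container's entrypoint binary, which on an arm64 node almost always indicates the image layer carries a binary built for another architecture. A hedged Go sketch for confirming this (the ./nginx path is hypothetical, standing in for a binary extracted from the registry.k8s.io/echoserver-arm:1.8 image):

```go
// elfarch.go - a hedged sketch: report which CPU architecture an ELF
// binary targets. elf.EM_AARCH64 would run on this arm64 node; anything
// else (e.g. elf.EM_X86_64) reproduces the "exec format error" above.
package main

import (
	"debug/elf"
	"fmt"
	"log"
)

func main() {
	f, err := elf.Open("./nginx") // hypothetical local copy of the image's entrypoint
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	fmt.Println("ELF machine:", f.Machine)
}
```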
functional_test.go:1610: (dbg) Run:  kubectl --context functional-727000 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.92.69
IPs:                      10.96.92.69
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31390/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
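Note the empty `Endpoints:` line in the service description above: with the only backing pod crash-looping, the Service has no ready endpoints, so the NodePort refuses connections, which matches the `connection refused` loop logged by functional_test.go:1657. A minimal sketch of that probe (URL taken from the test output above):

```go
// nodeportprobe.go - a sketch of the GET the test retries; with no
// Service endpoints behind the NodePort it fails with
// "connect: connection refused" rather than an HTTP-level error.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 3 * time.Second}
	resp, err := client.Get("http://192.168.105.4:31390")
	if err != nil {
		fmt.Println("fetch failed:", err) // expected while Endpoints is empty
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```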
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-727000 -n functional-727000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| service | functional-727000 service                                                                                            | functional-727000 | jenkins | v1.33.1 | 29 Jul 24 03:45 PDT | 29 Jul 24 03:45 PDT |
	|         | hello-node-connect --url                                                                                             |                   |         |         |                     |                     |
	| service | functional-727000 service list                                                                                       | functional-727000 | jenkins | v1.33.1 | 29 Jul 24 03:45 PDT | 29 Jul 24 03:45 PDT |
	| service | functional-727000 service list                                                                                       | functional-727000 | jenkins | v1.33.1 | 29 Jul 24 03:45 PDT | 29 Jul 24 03:45 PDT |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-727000 service                                                                                            | functional-727000 | jenkins | v1.33.1 | 29 Jul 24 03:45 PDT | 29 Jul 24 03:45 PDT |
	|         | --namespace=default --https                                                                                          |                   |         |         |                     |                     |
	|         | --url hello-node                                                                                                     |                   |         |         |                     |                     |
	| service | functional-727000                                                                                                    | functional-727000 | jenkins | v1.33.1 | 29 Jul 24 03:45 PDT | 29 Jul 24 03:45 PDT |
	|         | service hello-node --url                                                                                             |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                     |                   |         |         |                     |                     |
	| service | functional-727000 service                                                                                            | functional-727000 | jenkins | v1.33.1 | 29 Jul 24 03:45 PDT | 29 Jul 24 03:45 PDT |
	|         | hello-node --url                                                                                                     |                   |         |         |                     |                     |
	| ssh     | functional-727000 ssh findmnt                                                                                        | functional-727000 | jenkins | v1.33.1 | 29 Jul 24 03:45 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-727000                                                                                                 | functional-727000 | jenkins | v1.33.1 | 29 Jul 24 03:45 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1791735349/001:/mount-9p      |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-727000 ssh findmnt                                                                                        | functional-727000 | jenkins | v1.33.1 | 29 Jul 24 03:45 PDT | 29 Jul 24 03:45 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-727000 ssh -- ls                                                                                          | functional-727000 | jenkins | v1.33.1 | 29 Jul 24 03:45 PDT | 29 Jul 24 03:45 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-727000 ssh cat                                                                                            | functional-727000 | jenkins | v1.33.1 | 29 Jul 24 03:45 PDT | 29 Jul 24 03:45 PDT |
	|         | /mount-9p/test-1722249948977670000                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-727000 ssh stat                                                                                           | functional-727000 | jenkins | v1.33.1 | 29 Jul 24 03:45 PDT | 29 Jul 24 03:45 PDT |
	|         | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh     | functional-727000 ssh stat                                                                                           | functional-727000 | jenkins | v1.33.1 | 29 Jul 24 03:45 PDT | 29 Jul 24 03:45 PDT |
	|         | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-727000 ssh sudo                                                                                           | functional-727000 | jenkins | v1.33.1 | 29 Jul 24 03:45 PDT | 29 Jul 24 03:45 PDT |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-727000 ssh findmnt                                                                                        | functional-727000 | jenkins | v1.33.1 | 29 Jul 24 03:45 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-727000                                                                                                 | functional-727000 | jenkins | v1.33.1 | 29 Jul 24 03:45 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1983177758/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-727000 ssh findmnt                                                                                        | functional-727000 | jenkins | v1.33.1 | 29 Jul 24 03:45 PDT | 29 Jul 24 03:45 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-727000 ssh -- ls                                                                                          | functional-727000 | jenkins | v1.33.1 | 29 Jul 24 03:45 PDT | 29 Jul 24 03:45 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-727000 ssh sudo                                                                                           | functional-727000 | jenkins | v1.33.1 | 29 Jul 24 03:45 PDT |                     |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-727000 ssh findmnt                                                                                        | functional-727000 | jenkins | v1.33.1 | 29 Jul 24 03:45 PDT |                     |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| mount   | -p functional-727000                                                                                                 | functional-727000 | jenkins | v1.33.1 | 29 Jul 24 03:45 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1296640856/001:/mount1   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-727000                                                                                                 | functional-727000 | jenkins | v1.33.1 | 29 Jul 24 03:45 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1296640856/001:/mount3   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-727000                                                                                                 | functional-727000 | jenkins | v1.33.1 | 29 Jul 24 03:45 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1296640856/001:/mount2   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-727000 ssh findmnt                                                                                        | functional-727000 | jenkins | v1.33.1 | 29 Jul 24 03:45 PDT | 29 Jul 24 03:45 PDT |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-727000 ssh findmnt                                                                                        | functional-727000 | jenkins | v1.33.1 | 29 Jul 24 03:45 PDT |                     |
	|         | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
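	The mount assertions recorded in the table can be replayed by hand against the same profile; a minimal sketch using minikube's ssh subcommand (profile name and mount point taken from the rows above):

	  minikube -p functional-727000 ssh -- "findmnt -T /mount-9p | grep 9p"
	  minikube -p functional-727000 ssh -- "ls -la /mount-9p"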
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 03:44:06
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 03:44:06.920941    1912 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:44:06.921068    1912 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:44:06.921070    1912 out.go:304] Setting ErrFile to fd 2...
	I0729 03:44:06.921072    1912 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:44:06.921205    1912 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 03:44:06.922425    1912 out.go:298] Setting JSON to false
	I0729 03:44:06.939840    1912 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":809,"bootTime":1722249037,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 03:44:06.939891    1912 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:44:06.944894    1912 out.go:177] * [functional-727000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:44:06.953883    1912 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 03:44:06.953914    1912 notify.go:220] Checking for updates...
	I0729 03:44:06.959834    1912 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 03:44:06.962897    1912 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:44:06.964227    1912 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:44:06.966867    1912 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 03:44:06.969870    1912 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:44:06.973153    1912 config.go:182] Loaded profile config "functional-727000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:44:06.973197    1912 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:44:06.977828    1912 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 03:44:06.984886    1912 start.go:297] selected driver: qemu2
	I0729 03:44:06.984890    1912 start.go:901] validating driver "qemu2" against &{Name:functional-727000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:functional-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:44:06.984936    1912 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:44:06.987137    1912 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:44:06.987154    1912 cni.go:84] Creating CNI manager for ""
	I0729 03:44:06.987161    1912 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:44:06.987198    1912 start.go:340] cluster config:
	{Name:functional-727000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:44:06.990590    1912 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:44:06.994821    1912 out.go:177] * Starting "functional-727000" primary control-plane node in "functional-727000" cluster
	I0729 03:44:07.002854    1912 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:44:07.002875    1912 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:44:07.002884    1912 cache.go:56] Caching tarball of preloaded images
	I0729 03:44:07.002942    1912 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:44:07.002946    1912 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:44:07.003007    1912 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/config.json ...
	I0729 03:44:07.003489    1912 start.go:360] acquireMachinesLock for functional-727000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:44:07.003519    1912 start.go:364] duration metric: took 26.167µs to acquireMachinesLock for "functional-727000"
	I0729 03:44:07.003527    1912 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:44:07.003532    1912 fix.go:54] fixHost starting: 
	I0729 03:44:07.004109    1912 fix.go:112] recreateIfNeeded on functional-727000: state=Running err=<nil>
	W0729 03:44:07.004115    1912 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:44:07.008813    1912 out.go:177] * Updating the running qemu2 "functional-727000" VM ...
	I0729 03:44:07.016815    1912 machine.go:94] provisionDockerMachine start ...
	I0729 03:44:07.016853    1912 main.go:141] libmachine: Using SSH client type: native
	I0729 03:44:07.016965    1912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1007daa10] 0x1007dd270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0729 03:44:07.016968    1912 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 03:44:07.061726    1912 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-727000
	
	I0729 03:44:07.061739    1912 buildroot.go:166] provisioning hostname "functional-727000"
	I0729 03:44:07.061784    1912 main.go:141] libmachine: Using SSH client type: native
	I0729 03:44:07.061902    1912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1007daa10] 0x1007dd270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0729 03:44:07.061905    1912 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-727000 && echo "functional-727000" | sudo tee /etc/hostname
	I0729 03:44:07.110933    1912 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-727000
	
	I0729 03:44:07.110974    1912 main.go:141] libmachine: Using SSH client type: native
	I0729 03:44:07.111091    1912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1007daa10] 0x1007dd270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0729 03:44:07.111098    1912 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-727000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-727000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-727000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 03:44:07.155954    1912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
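	The grep/sed sequence above rewrites the 127.0.1.1 entry only if the hostname is not already present; a quick spot-check of the result, assuming the profile is still running:

	  minikube -p functional-727000 ssh -- "grep functional-727000 /etc/hosts"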
	I0729 03:44:07.155963    1912 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19336-945/.minikube CaCertPath:/Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19336-945/.minikube}
	I0729 03:44:07.155974    1912 buildroot.go:174] setting up certificates
	I0729 03:44:07.155977    1912 provision.go:84] configureAuth start
	I0729 03:44:07.155983    1912 provision.go:143] copyHostCerts
	I0729 03:44:07.156061    1912 exec_runner.go:144] found /Users/jenkins/minikube-integration/19336-945/.minikube/ca.pem, removing ...
	I0729 03:44:07.156065    1912 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19336-945/.minikube/ca.pem
	I0729 03:44:07.156202    1912 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19336-945/.minikube/ca.pem (1078 bytes)
	I0729 03:44:07.156390    1912 exec_runner.go:144] found /Users/jenkins/minikube-integration/19336-945/.minikube/cert.pem, removing ...
	I0729 03:44:07.156392    1912 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19336-945/.minikube/cert.pem
	I0729 03:44:07.156541    1912 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19336-945/.minikube/cert.pem (1123 bytes)
	I0729 03:44:07.156663    1912 exec_runner.go:144] found /Users/jenkins/minikube-integration/19336-945/.minikube/key.pem, removing ...
	I0729 03:44:07.156665    1912 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19336-945/.minikube/key.pem
	I0729 03:44:07.156722    1912 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19336-945/.minikube/key.pem (1679 bytes)
	I0729 03:44:07.156814    1912 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19336-945/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca-key.pem org=jenkins.functional-727000 san=[127.0.0.1 192.168.105.4 functional-727000 localhost minikube]
	I0729 03:44:07.260683    1912 provision.go:177] copyRemoteCerts
	I0729 03:44:07.260713    1912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 03:44:07.260718    1912 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/functional-727000/id_rsa Username:docker}
	I0729 03:44:07.285684    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 03:44:07.293958    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 03:44:07.302645    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 03:44:07.311040    1912 provision.go:87] duration metric: took 155.060542ms to configureAuth
	I0729 03:44:07.311045    1912 buildroot.go:189] setting minikube options for container-runtime
	I0729 03:44:07.311163    1912 config.go:182] Loaded profile config "functional-727000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:44:07.311196    1912 main.go:141] libmachine: Using SSH client type: native
	I0729 03:44:07.311276    1912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1007daa10] 0x1007dd270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0729 03:44:07.311279    1912 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 03:44:07.357064    1912 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 03:44:07.357068    1912 buildroot.go:70] root file system type: tmpfs
	I0729 03:44:07.357114    1912 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 03:44:07.357165    1912 main.go:141] libmachine: Using SSH client type: native
	I0729 03:44:07.357255    1912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1007daa10] 0x1007dd270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0729 03:44:07.357285    1912 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 03:44:07.406506    1912 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 03:44:07.406575    1912 main.go:141] libmachine: Using SSH client type: native
	I0729 03:44:07.406702    1912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1007daa10] 0x1007dd270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0729 03:44:07.406710    1912 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 03:44:07.452805    1912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 03:44:07.452811    1912 machine.go:97] duration metric: took 435.9985ms to provisionDockerMachine
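	The diff-and-move step above installs docker.service.new only when it differs from the live unit; the unit that actually landed on the guest can be inspected with the same systemctl invocation this log uses later:

	  minikube -p functional-727000 ssh -- "sudo systemctl cat docker.service"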
	I0729 03:44:07.452816    1912 start.go:293] postStartSetup for "functional-727000" (driver="qemu2")
	I0729 03:44:07.452821    1912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 03:44:07.452868    1912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 03:44:07.452875    1912 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/functional-727000/id_rsa Username:docker}
	I0729 03:44:07.478477    1912 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 03:44:07.480107    1912 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 03:44:07.480112    1912 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19336-945/.minikube/addons for local assets ...
	I0729 03:44:07.480196    1912 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19336-945/.minikube/files for local assets ...
	I0729 03:44:07.480329    1912 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19336-945/.minikube/files/etc/ssl/certs/13972.pem -> 13972.pem in /etc/ssl/certs
	I0729 03:44:07.480446    1912 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19336-945/.minikube/files/etc/test/nested/copy/1397/hosts -> hosts in /etc/test/nested/copy/1397
	I0729 03:44:07.480480    1912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1397
	I0729 03:44:07.483752    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/files/etc/ssl/certs/13972.pem --> /etc/ssl/certs/13972.pem (1708 bytes)
	I0729 03:44:07.492647    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/files/etc/test/nested/copy/1397/hosts --> /etc/test/nested/copy/1397/hosts (40 bytes)
	I0729 03:44:07.501286    1912 start.go:296] duration metric: took 48.465666ms for postStartSetup
	I0729 03:44:07.501297    1912 fix.go:56] duration metric: took 497.772ms for fixHost
	I0729 03:44:07.501335    1912 main.go:141] libmachine: Using SSH client type: native
	I0729 03:44:07.501445    1912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1007daa10] 0x1007dd270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0729 03:44:07.501448    1912 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 03:44:07.547647    1912 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722249847.505779706
	
	I0729 03:44:07.547654    1912 fix.go:216] guest clock: 1722249847.505779706
	I0729 03:44:07.547658    1912 fix.go:229] Guest: 2024-07-29 03:44:07.505779706 -0700 PDT Remote: 2024-07-29 03:44:07.501298 -0700 PDT m=+0.598757084 (delta=4.481706ms)
	I0729 03:44:07.547667    1912 fix.go:200] guest clock delta is within tolerance: 4.481706ms
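	The %!s(MISSING).%!N(MISSING) placeholders above are unrendered Go format verbs in minikube's logging; judging by the numeric output that follows, the command presumably executed on the guest is:

	  date +%s.%N    # epoch seconds with nanoseconds, e.g. 1722249847.505779706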
	I0729 03:44:07.547669    1912 start.go:83] releasing machines lock for "functional-727000", held for 544.154375ms
	I0729 03:44:07.547979    1912 ssh_runner.go:195] Run: cat /version.json
	I0729 03:44:07.547984    1912 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/functional-727000/id_rsa Username:docker}
	I0729 03:44:07.548000    1912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 03:44:07.548018    1912 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/functional-727000/id_rsa Username:docker}
	I0729 03:44:07.614607    1912 ssh_runner.go:195] Run: systemctl --version
	I0729 03:44:07.616722    1912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 03:44:07.618596    1912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 03:44:07.618616    1912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 03:44:07.622329    1912 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 03:44:07.622334    1912 start.go:495] detecting cgroup driver to use...
	I0729 03:44:07.622401    1912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 03:44:07.628602    1912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0729 03:44:07.632206    1912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 03:44:07.636233    1912 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 03:44:07.636256    1912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 03:44:07.639923    1912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 03:44:07.643483    1912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 03:44:07.647211    1912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 03:44:07.650808    1912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 03:44:07.654853    1912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 03:44:07.658814    1912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 03:44:07.662813    1912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 03:44:07.666892    1912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 03:44:07.670698    1912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 03:44:07.674122    1912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:44:07.795387    1912 ssh_runner.go:195] Run: sudo systemctl restart containerd
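	The sed edits above pin containerd to the cgroupfs driver before the restart; whether they took effect can be checked in place (expected to print SystemdCgroup = false, given the edit at 03:44:07.636):

	  minikube -p functional-727000 ssh -- "grep SystemdCgroup /etc/containerd/config.toml"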
	I0729 03:44:07.806101    1912 start.go:495] detecting cgroup driver to use...
	I0729 03:44:07.806167    1912 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 03:44:07.812092    1912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 03:44:07.817947    1912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 03:44:07.825015    1912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 03:44:07.831040    1912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 03:44:07.836591    1912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 03:44:07.844858    1912 ssh_runner.go:195] Run: which cri-dockerd
	I0729 03:44:07.846347    1912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 03:44:07.850268    1912 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 03:44:07.856476    1912 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 03:44:07.947432    1912 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 03:44:08.035282    1912 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 03:44:08.035329    1912 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 03:44:08.041825    1912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:44:08.164232    1912 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 03:44:20.445621    1912 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.281526708s)
	I0729 03:44:20.445684    1912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 03:44:20.452890    1912 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0729 03:44:20.462485    1912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 03:44:20.468908    1912 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 03:44:20.542266    1912 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 03:44:20.617289    1912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:44:20.696993    1912 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 03:44:20.703968    1912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 03:44:20.709407    1912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:44:20.781759    1912 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 03:44:20.810575    1912 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 03:44:20.810640    1912 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 03:44:20.814139    1912 start.go:563] Will wait 60s for crictl version
	I0729 03:44:20.814184    1912 ssh_runner.go:195] Run: which crictl
	I0729 03:44:20.815618    1912 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 03:44:20.827627    1912 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.0
	RuntimeApiVersion:  v1
	I0729 03:44:20.827701    1912 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 03:44:20.834908    1912 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 03:44:20.849113    1912 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.0 ...
	I0729 03:44:20.849252    1912 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0729 03:44:20.857961    1912 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0729 03:44:20.862023    1912 kubeadm.go:883] updating cluster {Name:functional-727000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 03:44:20.862072    1912 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:44:20.862107    1912 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 03:44:20.868151    1912 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-727000
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0729 03:44:20.868155    1912 docker.go:615] Images already preloaded, skipping extraction
	I0729 03:44:20.868203    1912 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 03:44:20.877417    1912 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-727000
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
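	The preload check above shells out to docker images twice and compares the listing against the expected manifest; the same listing can be taken manually with the exact command the log runs:

	  minikube -p functional-727000 ssh -- "docker images --format '{{.Repository}}:{{.Tag}}'"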
	I0729 03:44:20.877423    1912 cache_images.go:84] Images are preloaded, skipping loading
	I0729 03:44:20.877426    1912 kubeadm.go:934] updating node { 192.168.105.4 8441 v1.30.3 docker true true} ...
	I0729 03:44:20.877475    1912 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-727000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:functional-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 03:44:20.877521    1912 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 03:44:20.893111    1912 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0729 03:44:20.893159    1912 cni.go:84] Creating CNI manager for ""
	I0729 03:44:20.893166    1912 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:44:20.893169    1912 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 03:44:20.893178    1912 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-727000 NodeName:functional-727000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 03:44:20.893246    1912 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-727000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
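	
	The config above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new and compared against the live copy before the restart (the resulting drift diff appears below); a sketch for reproducing that comparison by hand:

	  minikube -p functional-727000 ssh -- "sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new"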
	
	I0729 03:44:20.893305    1912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 03:44:20.896681    1912 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 03:44:20.896702    1912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 03:44:20.900012    1912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0729 03:44:20.905808    1912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 03:44:20.911939    1912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2012 bytes)
	I0729 03:44:20.918073    1912 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0729 03:44:20.919522    1912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:44:20.992375    1912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 03:44:20.998074    1912 certs.go:68] Setting up /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000 for IP: 192.168.105.4
	I0729 03:44:20.998078    1912 certs.go:194] generating shared ca certs ...
	I0729 03:44:20.998085    1912 certs.go:226] acquiring lock for ca certs: {Name:mk0965f831896eb9b1f5dee9ac66a2ece4b593d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:44:20.998265    1912 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19336-945/.minikube/ca.key
	I0729 03:44:20.998315    1912 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19336-945/.minikube/proxy-client-ca.key
	I0729 03:44:20.998324    1912 certs.go:256] generating profile certs ...
	I0729 03:44:20.998409    1912 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/client.key
	I0729 03:44:20.998466    1912 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/apiserver.key.b7148972
	I0729 03:44:20.998515    1912 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/proxy-client.key
	I0729 03:44:20.998671    1912 certs.go:484] found cert: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/1397.pem (1338 bytes)
	W0729 03:44:20.998701    1912 certs.go:480] ignoring /Users/jenkins/minikube-integration/19336-945/.minikube/certs/1397_empty.pem, impossibly tiny 0 bytes
	I0729 03:44:20.998705    1912 certs.go:484] found cert: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 03:44:20.998733    1912 certs.go:484] found cert: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem (1078 bytes)
	I0729 03:44:20.998760    1912 certs.go:484] found cert: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem (1123 bytes)
	I0729 03:44:20.998782    1912 certs.go:484] found cert: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/key.pem (1679 bytes)
	I0729 03:44:20.998834    1912 certs.go:484] found cert: /Users/jenkins/minikube-integration/19336-945/.minikube/files/etc/ssl/certs/13972.pem (1708 bytes)
	I0729 03:44:20.999182    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 03:44:21.007759    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 03:44:21.015946    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 03:44:21.023991    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 03:44:21.032040    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 03:44:21.040103    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 03:44:21.048010    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 03:44:21.056523    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 03:44:21.064610    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/files/etc/ssl/certs/13972.pem --> /usr/share/ca-certificates/13972.pem (1708 bytes)
	I0729 03:44:21.072927    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 03:44:21.081114    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/certs/1397.pem --> /usr/share/ca-certificates/1397.pem (1338 bytes)
	I0729 03:44:21.089388    1912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 03:44:21.095224    1912 ssh_runner.go:195] Run: openssl version
	I0729 03:44:21.097574    1912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 03:44:21.101211    1912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 03:44:21.102838    1912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:35 /usr/share/ca-certificates/minikubeCA.pem
	I0729 03:44:21.102853    1912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 03:44:21.105025    1912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 03:44:21.108566    1912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1397.pem && ln -fs /usr/share/ca-certificates/1397.pem /etc/ssl/certs/1397.pem"
	I0729 03:44:21.112191    1912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1397.pem
	I0729 03:44:21.113791    1912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:42 /usr/share/ca-certificates/1397.pem
	I0729 03:44:21.113808    1912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1397.pem
	I0729 03:44:21.115913    1912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1397.pem /etc/ssl/certs/51391683.0"
	I0729 03:44:21.119520    1912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13972.pem && ln -fs /usr/share/ca-certificates/13972.pem /etc/ssl/certs/13972.pem"
	I0729 03:44:21.123657    1912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13972.pem
	I0729 03:44:21.125474    1912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:42 /usr/share/ca-certificates/13972.pem
	I0729 03:44:21.125494    1912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13972.pem
	I0729 03:44:21.127429    1912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13972.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 03:44:21.130976    1912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 03:44:21.132669    1912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 03:44:21.134740    1912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 03:44:21.136905    1912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 03:44:21.138935    1912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 03:44:21.140894    1912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 03:44:21.142866    1912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
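	Each -checkend 86400 call above exits non-zero if the certificate expires within the next 24 hours; a standalone sketch of the same check against one of the certs listed:

	  minikube -p functional-727000 ssh -- "sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo still valid || echo expiring within 24h"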
	I0729 03:44:21.144963    1912 kubeadm.go:392] StartCluster: {Name:functional-727000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:44:21.145030    1912 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 03:44:21.151024    1912 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 03:44:21.154872    1912 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 03:44:21.154875    1912 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 03:44:21.154899    1912 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 03:44:21.158501    1912 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 03:44:21.158813    1912 kubeconfig.go:125] found "functional-727000" server: "https://192.168.105.4:8441"
	I0729 03:44:21.159481    1912 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 03:44:21.163120    1912 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
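Config drift is detected purely from the exit status of `diff -u` between the kubeadm.yaml on disk and the freshly rendered kubeadm.yaml.new: 0 means identical, 1 means drift (as here, where the admission-plugins list changed), so the cluster is reconfigured from the new file. The gate in isolation:

    # diff -u exits 0 when the files match, 1 when they differ, >1 on error.
    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
        echo "no drift: keep existing cluster config"
    else
        echo "drift detected: reconfigure from kubeadm.yaml.new"
    fi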
	I0729 03:44:21.163124    1912 kubeadm.go:1160] stopping kube-system containers ...
	I0729 03:44:21.163162    1912 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 03:44:21.170429    1912 docker.go:483] Stopping containers: [ee639b072db1 6bb19568d73f 1d761281110d f3a48175ff52 a7c87828c7cb 2a0feb3f0802 e5ff4359d790 b13162454ce3 d64e8d535aec 314e3b4e109b acca8e911846 28965a3f0baa 9f86fab38730 0110f04c70de e5bf8a675684 e1fbaa9e445c 32b7026ddbb2 484bbf8ba87b a69559830473 26c275ccfa0c 55fd7f1396bc fd5c6c5f053f 2d070dab283a 1ec24f78a908 939e18e4c93e 0f77e6629c3d f060be06e574 6921aef1713b 09633fb9c2a2 cdbe90536910]
	I0729 03:44:21.170492    1912 ssh_runner.go:195] Run: docker stop ee639b072db1 6bb19568d73f 1d761281110d f3a48175ff52 a7c87828c7cb 2a0feb3f0802 e5ff4359d790 b13162454ce3 d64e8d535aec 314e3b4e109b acca8e911846 28965a3f0baa 9f86fab38730 0110f04c70de e5bf8a675684 e1fbaa9e445c 32b7026ddbb2 484bbf8ba87b a69559830473 26c275ccfa0c 55fd7f1396bc fd5c6c5f053f 2d070dab283a 1ec24f78a908 939e18e4c93e 0f77e6629c3d f060be06e574 6921aef1713b 09633fb9c2a2 cdbe90536910
	I0729 03:44:21.177782    1912 ssh_runner.go:195] Run: sudo systemctl stop kubelet
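The container list above comes from the naming convention cri-dockerd applies (k8s_<container>_<pod>_<namespace>_...), so a regex name filter on `k8s_.*_(kube-system)_` selects exactly the kube-system pod containers; kubelet is stopped afterwards so it cannot restart them mid-reconfigure. A sketch of that shutdown sequence:

    # Stop every container whose cri-dockerd name marks it as a kube-system pod member.
    ids=$(docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}}')
    [ -n "$ids" ] && docker stop $ids
    sudo systemctl stop kubelet   # prevent automatic restarts during reconfiguration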
	I0729 03:44:21.268663    1912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 03:44:21.273970    1912 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Jul 29 10:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jul 29 10:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jul 29 10:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jul 29 10:43 /etc/kubernetes/scheduler.conf
	
	I0729 03:44:21.274000    1912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0729 03:44:21.278625    1912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0729 03:44:21.283035    1912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0729 03:44:21.287201    1912 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 03:44:21.287227    1912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 03:44:21.291386    1912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0729 03:44:21.295013    1912 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 03:44:21.295029    1912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 03:44:21.298818    1912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 03:44:21.302262    1912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 03:44:21.320684    1912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 03:44:21.972170    1912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 03:44:22.091878    1912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 03:44:22.116203    1912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
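Rather than a full `kubeadm init`, the restart replays individual init phases against the version-pinned binaries under /var/lib/minikube/binaries, in dependency order: certs, kubeconfigs, kubelet, static control-plane manifests, local etcd. A sketch of the same sequence:

    # Re-run only the kubeadm init phases needed for an in-place restart.
    CFG=/var/tmp/minikube/kubeadm.yaml
    BIN=/var/lib/minikube/binaries/v1.30.3
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        # $phase is intentionally unquoted so "certs all" splits into two arguments.
        sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CFG"
    done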
	I0729 03:44:22.150521    1912 api_server.go:52] waiting for apiserver process to appear ...
	I0729 03:44:22.150580    1912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 03:44:22.652637    1912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 03:44:23.152659    1912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 03:44:23.157940    1912 api_server.go:72] duration metric: took 1.007432542s to wait for apiserver process to appear ...
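The process wait above polls pgrep roughly every 500ms: with `-f` the pattern is matched against the full command line, `-x` requires the whole line to match the regex, and `-n` returns only the newest matching PID. The same poll loop by hand:

    # Poll until a kube-apiserver launched with minikube's flags is running.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sleep 0.5
    done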
	I0729 03:44:23.157946    1912 api_server.go:88] waiting for apiserver healthz status ...
	I0729 03:44:23.157954    1912 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0729 03:44:24.743020    1912 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 03:44:24.743030    1912 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 03:44:24.743035    1912 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0729 03:44:24.782924    1912 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 03:44:24.782933    1912 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 03:44:25.159988    1912 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0729 03:44:25.163871    1912 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 03:44:25.163878    1912 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 03:44:25.659960    1912 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0729 03:44:25.663113    1912 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 03:44:25.663123    1912 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 03:44:26.159980    1912 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0729 03:44:26.162837    1912 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0729 03:44:26.166713    1912 api_server.go:141] control plane version: v1.30.3
	I0729 03:44:26.166719    1912 api_server.go:131] duration metric: took 3.00880775s to wait for apiserver health ...
	I0729 03:44:26.166723    1912 cni.go:84] Creating CNI manager for ""
	I0729 03:44:26.166729    1912 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:44:26.171935    1912 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 03:44:26.175739    1912 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 03:44:26.179495    1912 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
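The 496 bytes streamed from memory are the default bridge CNI configuration. A minimal sketch of what such a conflist looks like; the fields and subnet below are common bridge+portmap defaults, not the literal bytes copied in this run:

    # /etc/cni/net.d/1-k8s.conflist -- illustrative contents (values are assumptions)
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }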
	I0729 03:44:26.185230    1912 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 03:44:26.189771    1912 system_pods.go:59] 7 kube-system pods found
	I0729 03:44:26.189779    1912 system_pods.go:61] "coredns-7db6d8ff4d-r657x" [e6e7d731-c3e5-44de-843c-fa9855c5626c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 03:44:26.189782    1912 system_pods.go:61] "etcd-functional-727000" [8291e549-9874-4c84-b689-815842a5507e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 03:44:26.189784    1912 system_pods.go:61] "kube-apiserver-functional-727000" [c1ddc8bd-5789-4e2a-ae34-829caeee8e82] Pending
	I0729 03:44:26.189787    1912 system_pods.go:61] "kube-controller-manager-functional-727000" [0759a4c0-b38a-42d2-a5c9-df7709dc2c6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 03:44:26.189789    1912 system_pods.go:61] "kube-proxy-qrmbh" [78023809-9d7d-474b-b719-42e294c40f10] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 03:44:26.189791    1912 system_pods.go:61] "kube-scheduler-functional-727000" [9349b682-319b-4080-82c6-0636a1b75dcf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 03:44:26.189792    1912 system_pods.go:61] "storage-provisioner" [0aae2c8c-d0d5-4353-9079-12f26ea44af1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 03:44:26.189794    1912 system_pods.go:74] duration metric: took 4.561334ms to wait for pod list to return data ...
	I0729 03:44:26.189797    1912 node_conditions.go:102] verifying NodePressure condition ...
	I0729 03:44:26.191179    1912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 03:44:26.191183    1912 node_conditions.go:123] node cpu capacity is 2
	I0729 03:44:26.191188    1912 node_conditions.go:105] duration metric: took 1.388959ms to run NodePressure ...
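The NodePressure check reads the node's capacity and would fail on any Memory/Disk/PID pressure condition. The same data is visible directly from the API:

    # Show node capacity, then each condition's type and status.
    kubectl get node functional-727000 -o jsonpath='{.status.capacity}{"\n"}'
    kubectl get node functional-727000 \
        -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'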
	I0729 03:44:26.191196    1912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 03:44:26.412938    1912 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 03:44:26.415270    1912 kubeadm.go:739] kubelet initialised
	I0729 03:44:26.415274    1912 kubeadm.go:740] duration metric: took 2.328291ms waiting for restarted kubelet to initialise ...
	I0729 03:44:26.415277    1912 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 03:44:26.417639    1912 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-r657x" in "kube-system" namespace to be "Ready" ...
	I0729 03:44:28.422888    1912 pod_ready.go:102] pod "coredns-7db6d8ff4d-r657x" in "kube-system" namespace has status "Ready":"False"
	I0729 03:44:30.922734    1912 pod_ready.go:102] pod "coredns-7db6d8ff4d-r657x" in "kube-system" namespace has status "Ready":"False"
	I0729 03:44:33.422076    1912 pod_ready.go:102] pod "coredns-7db6d8ff4d-r657x" in "kube-system" namespace has status "Ready":"False"
	I0729 03:44:35.922211    1912 pod_ready.go:102] pod "coredns-7db6d8ff4d-r657x" in "kube-system" namespace has status "Ready":"False"
	I0729 03:44:37.922419    1912 pod_ready.go:102] pod "coredns-7db6d8ff4d-r657x" in "kube-system" namespace has status "Ready":"False"
	I0729 03:44:40.422100    1912 pod_ready.go:102] pod "coredns-7db6d8ff4d-r657x" in "kube-system" namespace has status "Ready":"False"
	I0729 03:44:42.422473    1912 pod_ready.go:102] pod "coredns-7db6d8ff4d-r657x" in "kube-system" namespace has status "Ready":"False"
	I0729 03:44:44.922546    1912 pod_ready.go:102] pod "coredns-7db6d8ff4d-r657x" in "kube-system" namespace has status "Ready":"False"
	I0729 03:44:47.422401    1912 pod_ready.go:102] pod "coredns-7db6d8ff4d-r657x" in "kube-system" namespace has status "Ready":"False"
	I0729 03:44:49.922404    1912 pod_ready.go:102] pod "coredns-7db6d8ff4d-r657x" in "kube-system" namespace has status "Ready":"False"
	I0729 03:44:52.422157    1912 pod_ready.go:102] pod "coredns-7db6d8ff4d-r657x" in "kube-system" namespace has status "Ready":"False"
	I0729 03:44:54.922256    1912 pod_ready.go:102] pod "coredns-7db6d8ff4d-r657x" in "kube-system" namespace has status "Ready":"False"
	I0729 03:44:57.421875    1912 pod_ready.go:102] pod "coredns-7db6d8ff4d-r657x" in "kube-system" namespace has status "Ready":"False"
	I0729 03:44:59.922238    1912 pod_ready.go:102] pod "coredns-7db6d8ff4d-r657x" in "kube-system" namespace has status "Ready":"False"
	I0729 03:45:01.922341    1912 pod_ready.go:102] pod "coredns-7db6d8ff4d-r657x" in "kube-system" namespace has status "Ready":"False"
	I0729 03:45:04.422210    1912 pod_ready.go:102] pod "coredns-7db6d8ff4d-r657x" in "kube-system" namespace has status "Ready":"False"
	I0729 03:45:05.421838    1912 pod_ready.go:92] pod "coredns-7db6d8ff4d-r657x" in "kube-system" namespace has status "Ready":"True"
	I0729 03:45:05.421844    1912 pod_ready.go:81] duration metric: took 39.004681542s for pod "coredns-7db6d8ff4d-r657x" in "kube-system" namespace to be "Ready" ...
	I0729 03:45:05.421848    1912 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-727000" in "kube-system" namespace to be "Ready" ...
	I0729 03:45:05.423915    1912 pod_ready.go:92] pod "etcd-functional-727000" in "kube-system" namespace has status "Ready":"True"
	I0729 03:45:05.423918    1912 pod_ready.go:81] duration metric: took 2.068042ms for pod "etcd-functional-727000" in "kube-system" namespace to be "Ready" ...
	I0729 03:45:05.423921    1912 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-727000" in "kube-system" namespace to be "Ready" ...
	I0729 03:45:05.425982    1912 pod_ready.go:92] pod "kube-apiserver-functional-727000" in "kube-system" namespace has status "Ready":"True"
	I0729 03:45:05.425988    1912 pod_ready.go:81] duration metric: took 2.0645ms for pod "kube-apiserver-functional-727000" in "kube-system" namespace to be "Ready" ...
	I0729 03:45:05.425991    1912 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-727000" in "kube-system" namespace to be "Ready" ...
	I0729 03:45:05.428167    1912 pod_ready.go:92] pod "kube-controller-manager-functional-727000" in "kube-system" namespace has status "Ready":"True"
	I0729 03:45:05.428169    1912 pod_ready.go:81] duration metric: took 2.176625ms for pod "kube-controller-manager-functional-727000" in "kube-system" namespace to be "Ready" ...
	I0729 03:45:05.428172    1912 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qrmbh" in "kube-system" namespace to be "Ready" ...
	I0729 03:45:05.430006    1912 pod_ready.go:92] pod "kube-proxy-qrmbh" in "kube-system" namespace has status "Ready":"True"
	I0729 03:45:05.430008    1912 pod_ready.go:81] duration metric: took 1.833792ms for pod "kube-proxy-qrmbh" in "kube-system" namespace to be "Ready" ...
	I0729 03:45:05.430011    1912 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-727000" in "kube-system" namespace to be "Ready" ...
	I0729 03:45:05.822614    1912 pod_ready.go:92] pod "kube-scheduler-functional-727000" in "kube-system" namespace has status "Ready":"True"
	I0729 03:45:05.822620    1912 pod_ready.go:81] duration metric: took 392.610833ms for pod "kube-scheduler-functional-727000" in "kube-system" namespace to be "Ready" ...
	I0729 03:45:05.822623    1912 pod_ready.go:38] duration metric: took 39.40782825s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
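The 39s wait above is dominated by coredns returning to Ready after the restart. The equivalent wait can be expressed with kubectl directly, using the same labels the log lists:

    # Block until the system-critical pods report Ready, or fail after 4m.
    kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
    kubectl -n kube-system wait pod -l k8s-app=kube-proxy --for=condition=Ready --timeout=4m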
	I0729 03:45:05.822631    1912 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 03:45:05.826797    1912 ops.go:34] apiserver oom_adj: -16
	I0729 03:45:05.826801    1912 kubeadm.go:597] duration metric: took 44.672474041s to restartPrimaryControlPlane
	I0729 03:45:05.826804    1912 kubeadm.go:394] duration metric: took 44.682394209s to StartCluster
	I0729 03:45:05.826812    1912 settings.go:142] acquiring lock: {Name:mkb57b03ccb64deee52152ed8ac01af4d9e1ee07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:45:05.826906    1912 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 03:45:05.827274    1912 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/kubeconfig: {Name:mkc1463454d977493e341af62af023d087f8e1b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:45:05.827502    1912 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:45:05.827521    1912 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 03:45:05.827554    1912 addons.go:69] Setting storage-provisioner=true in profile "functional-727000"
	I0729 03:45:05.827558    1912 addons.go:69] Setting default-storageclass=true in profile "functional-727000"
	I0729 03:45:05.827566    1912 addons.go:234] Setting addon storage-provisioner=true in "functional-727000"
	W0729 03:45:05.827568    1912 addons.go:243] addon storage-provisioner should already be in state true
	I0729 03:45:05.827584    1912 config.go:182] Loaded profile config "functional-727000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:45:05.827588    1912 host.go:66] Checking if "functional-727000" exists ...
	I0729 03:45:05.827623    1912 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-727000"
	I0729 03:45:05.828732    1912 addons.go:234] Setting addon default-storageclass=true in "functional-727000"
	W0729 03:45:05.828736    1912 addons.go:243] addon default-storageclass should already be in state true
	I0729 03:45:05.828742    1912 host.go:66] Checking if "functional-727000" exists ...
	I0729 03:45:05.831193    1912 out.go:177] * Verifying Kubernetes components...
	I0729 03:45:05.831542    1912 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 03:45:05.834260    1912 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 03:45:05.834269    1912 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/functional-727000/id_rsa Username:docker}
	I0729 03:45:05.838056    1912 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 03:45:05.842072    1912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:45:05.846105    1912 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 03:45:05.846108    1912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 03:45:05.846113    1912 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/functional-727000/id_rsa Username:docker}
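The "scp memory -->" lines copy a byte buffer straight from the minikube process over the SSH connection set up above, rather than a file on disk. An illustrative hand-rolled equivalent streams over ssh into tee; the manifest variable here is a stand-in, not minikube's API:

    # Stream in-memory content over SSH into a root-owned file (illustrative equivalent).
    printf '%s' "$STORAGECLASS_YAML" |
        ssh -i ~/.minikube/machines/functional-727000/id_rsa docker@192.168.105.4 \
            'sudo tee /etc/kubernetes/addons/storageclass.yaml >/dev/null'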
	I0729 03:45:05.953354    1912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 03:45:05.959059    1912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 03:45:05.962921    1912 node_ready.go:35] waiting up to 6m0s for node "functional-727000" to be "Ready" ...
	I0729 03:45:05.999475    1912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 03:45:06.020235    1912 node_ready.go:49] node "functional-727000" has status "Ready":"True"
	I0729 03:45:06.020241    1912 node_ready.go:38] duration metric: took 57.31375ms for node "functional-727000" to be "Ready" ...
	I0729 03:45:06.020244    1912 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 03:45:06.224352    1912 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-r657x" in "kube-system" namespace to be "Ready" ...
	I0729 03:45:06.286516    1912 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0729 03:45:06.290565    1912 addons.go:510] duration metric: took 463.054209ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0729 03:45:06.622769    1912 pod_ready.go:92] pod "coredns-7db6d8ff4d-r657x" in "kube-system" namespace has status "Ready":"True"
	I0729 03:45:06.622777    1912 pod_ready.go:81] duration metric: took 398.424541ms for pod "coredns-7db6d8ff4d-r657x" in "kube-system" namespace to be "Ready" ...
	I0729 03:45:06.622781    1912 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-727000" in "kube-system" namespace to be "Ready" ...
	I0729 03:45:07.022696    1912 pod_ready.go:92] pod "etcd-functional-727000" in "kube-system" namespace has status "Ready":"True"
	I0729 03:45:07.022704    1912 pod_ready.go:81] duration metric: took 399.923667ms for pod "etcd-functional-727000" in "kube-system" namespace to be "Ready" ...
	I0729 03:45:07.022708    1912 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-727000" in "kube-system" namespace to be "Ready" ...
	I0729 03:45:07.421544    1912 pod_ready.go:92] pod "kube-apiserver-functional-727000" in "kube-system" namespace has status "Ready":"True"
	I0729 03:45:07.421549    1912 pod_ready.go:81] duration metric: took 398.8435ms for pod "kube-apiserver-functional-727000" in "kube-system" namespace to be "Ready" ...
	I0729 03:45:07.421553    1912 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-727000" in "kube-system" namespace to be "Ready" ...
	I0729 03:45:07.822848    1912 pod_ready.go:92] pod "kube-controller-manager-functional-727000" in "kube-system" namespace has status "Ready":"True"
	I0729 03:45:07.822856    1912 pod_ready.go:81] duration metric: took 401.304458ms for pod "kube-controller-manager-functional-727000" in "kube-system" namespace to be "Ready" ...
	I0729 03:45:07.822860    1912 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qrmbh" in "kube-system" namespace to be "Ready" ...
	I0729 03:45:08.222661    1912 pod_ready.go:92] pod "kube-proxy-qrmbh" in "kube-system" namespace has status "Ready":"True"
	I0729 03:45:08.222666    1912 pod_ready.go:81] duration metric: took 399.808583ms for pod "kube-proxy-qrmbh" in "kube-system" namespace to be "Ready" ...
	I0729 03:45:08.222670    1912 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-727000" in "kube-system" namespace to be "Ready" ...
	I0729 03:45:08.622529    1912 pod_ready.go:92] pod "kube-scheduler-functional-727000" in "kube-system" namespace has status "Ready":"True"
	I0729 03:45:08.622536    1912 pod_ready.go:81] duration metric: took 399.867791ms for pod "kube-scheduler-functional-727000" in "kube-system" namespace to be "Ready" ...
	I0729 03:45:08.622540    1912 pod_ready.go:38] duration metric: took 2.602323833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 03:45:08.622550    1912 api_server.go:52] waiting for apiserver process to appear ...
	I0729 03:45:08.622628    1912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 03:45:08.628692    1912 api_server.go:72] duration metric: took 2.801214125s to wait for apiserver process to appear ...
	I0729 03:45:08.628698    1912 api_server.go:88] waiting for apiserver healthz status ...
	I0729 03:45:08.628709    1912 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0729 03:45:08.631419    1912 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0729 03:45:08.632047    1912 api_server.go:141] control plane version: v1.30.3
	I0729 03:45:08.632050    1912 api_server.go:131] duration metric: took 3.350459ms to wait for apiserver health ...
	I0729 03:45:08.632053    1912 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 03:45:08.824239    1912 system_pods.go:59] 7 kube-system pods found
	I0729 03:45:08.824244    1912 system_pods.go:61] "coredns-7db6d8ff4d-r657x" [e6e7d731-c3e5-44de-843c-fa9855c5626c] Running
	I0729 03:45:08.824246    1912 system_pods.go:61] "etcd-functional-727000" [8291e549-9874-4c84-b689-815842a5507e] Running
	I0729 03:45:08.824248    1912 system_pods.go:61] "kube-apiserver-functional-727000" [c1ddc8bd-5789-4e2a-ae34-829caeee8e82] Running
	I0729 03:45:08.824250    1912 system_pods.go:61] "kube-controller-manager-functional-727000" [0759a4c0-b38a-42d2-a5c9-df7709dc2c6c] Running
	I0729 03:45:08.824251    1912 system_pods.go:61] "kube-proxy-qrmbh" [78023809-9d7d-474b-b719-42e294c40f10] Running
	I0729 03:45:08.824252    1912 system_pods.go:61] "kube-scheduler-functional-727000" [9349b682-319b-4080-82c6-0636a1b75dcf] Running
	I0729 03:45:08.824253    1912 system_pods.go:61] "storage-provisioner" [0aae2c8c-d0d5-4353-9079-12f26ea44af1] Running
	I0729 03:45:08.824255    1912 system_pods.go:74] duration metric: took 192.202334ms to wait for pod list to return data ...
	I0729 03:45:08.824257    1912 default_sa.go:34] waiting for default service account to be created ...
	I0729 03:45:09.022302    1912 default_sa.go:45] found service account: "default"
	I0729 03:45:09.022306    1912 default_sa.go:55] duration metric: took 198.048917ms for default service account to be created ...
	I0729 03:45:09.022308    1912 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 03:45:09.224046    1912 system_pods.go:86] 7 kube-system pods found
	I0729 03:45:09.224050    1912 system_pods.go:89] "coredns-7db6d8ff4d-r657x" [e6e7d731-c3e5-44de-843c-fa9855c5626c] Running
	I0729 03:45:09.224053    1912 system_pods.go:89] "etcd-functional-727000" [8291e549-9874-4c84-b689-815842a5507e] Running
	I0729 03:45:09.224055    1912 system_pods.go:89] "kube-apiserver-functional-727000" [c1ddc8bd-5789-4e2a-ae34-829caeee8e82] Running
	I0729 03:45:09.224056    1912 system_pods.go:89] "kube-controller-manager-functional-727000" [0759a4c0-b38a-42d2-a5c9-df7709dc2c6c] Running
	I0729 03:45:09.224057    1912 system_pods.go:89] "kube-proxy-qrmbh" [78023809-9d7d-474b-b719-42e294c40f10] Running
	I0729 03:45:09.224059    1912 system_pods.go:89] "kube-scheduler-functional-727000" [9349b682-319b-4080-82c6-0636a1b75dcf] Running
	I0729 03:45:09.224060    1912 system_pods.go:89] "storage-provisioner" [0aae2c8c-d0d5-4353-9079-12f26ea44af1] Running
	I0729 03:45:09.224062    1912 system_pods.go:126] duration metric: took 201.754667ms to wait for k8s-apps to be running ...
	I0729 03:45:09.224065    1912 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 03:45:09.224139    1912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 03:45:09.230752    1912 system_svc.go:56] duration metric: took 6.684042ms WaitForService to wait for kubelet
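`systemctl is-active --quiet` prints nothing and reports state purely through its exit code, which is what makes it usable as a boolean in the service wait:

    # Exit 0 iff the unit is active; --quiet suppresses the state string.
    if sudo systemctl is-active --quiet kubelet; then
        echo "kubelet running"
    fi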
	I0729 03:45:09.230760    1912 kubeadm.go:582] duration metric: took 3.403289708s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:45:09.230769    1912 node_conditions.go:102] verifying NodePressure condition ...
	I0729 03:45:09.422463    1912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 03:45:09.422467    1912 node_conditions.go:123] node cpu capacity is 2
	I0729 03:45:09.422473    1912 node_conditions.go:105] duration metric: took 191.70275ms to run NodePressure ...
	I0729 03:45:09.422477    1912 start.go:241] waiting for startup goroutines ...
	I0729 03:45:09.422480    1912 start.go:246] waiting for cluster config update ...
	I0729 03:45:09.422485    1912 start.go:255] writing updated cluster config ...
	I0729 03:45:09.422904    1912 ssh_runner.go:195] Run: rm -f paused
	I0729 03:45:09.453903    1912 start.go:600] kubectl: 1.29.2, cluster: 1.30.3 (minor skew: 1)
	I0729 03:45:09.457740    1912 out.go:177] * Done! kubectl is now configured to use "functional-727000" cluster and "default" namespace by default
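The closing note reflects that the kubeconfig update earlier made functional-727000 the current context, and that a v1.29 kubectl against a v1.30 control plane is within kubectl's supported one-minor-version skew. Verifying or switching context later:

    kubectl config current-context              # -> functional-727000
    kubectl config use-context functional-727000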
	
	
	==> Docker <==
	Jul 29 10:45:49 functional-727000 dockerd[6064]: time="2024-07-29T10:45:49.899018590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 10:45:49 functional-727000 dockerd[6064]: time="2024-07-29T10:45:49.899056260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 10:45:49 functional-727000 dockerd[6064]: time="2024-07-29T10:45:49.899094263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 10:45:49 functional-727000 cri-dockerd[6323]: time="2024-07-29T10:45:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eaaefd361559b107763b9e2f15c23ff3174a29c4a1b8aa75d27bc1251c94e1d0/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 29 10:45:50 functional-727000 cri-dockerd[6323]: time="2024-07-29T10:45:50Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Jul 29 10:45:50 functional-727000 dockerd[6064]: time="2024-07-29T10:45:50.986438997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 10:45:50 functional-727000 dockerd[6064]: time="2024-07-29T10:45:50.986509461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 10:45:50 functional-727000 dockerd[6064]: time="2024-07-29T10:45:50.986519837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 10:45:50 functional-727000 dockerd[6064]: time="2024-07-29T10:45:50.986784149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 10:45:51 functional-727000 dockerd[6058]: time="2024-07-29T10:45:51.021431884Z" level=info msg="ignoring event" container=ee132d567870ccea31120d7a2f40cc2a4b84ba4fa2991fce3a8556b66437dc5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 10:45:51 functional-727000 dockerd[6064]: time="2024-07-29T10:45:51.021644317Z" level=info msg="shim disconnected" id=ee132d567870ccea31120d7a2f40cc2a4b84ba4fa2991fce3a8556b66437dc5b namespace=moby
	Jul 29 10:45:51 functional-727000 dockerd[6064]: time="2024-07-29T10:45:51.021673694Z" level=warning msg="cleaning up after shim disconnected" id=ee132d567870ccea31120d7a2f40cc2a4b84ba4fa2991fce3a8556b66437dc5b namespace=moby
	Jul 29 10:45:51 functional-727000 dockerd[6064]: time="2024-07-29T10:45:51.021678319Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 10:45:52 functional-727000 dockerd[6058]: time="2024-07-29T10:45:52.677540021Z" level=info msg="ignoring event" container=eaaefd361559b107763b9e2f15c23ff3174a29c4a1b8aa75d27bc1251c94e1d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 10:45:52 functional-727000 dockerd[6064]: time="2024-07-29T10:45:52.677719785Z" level=info msg="shim disconnected" id=eaaefd361559b107763b9e2f15c23ff3174a29c4a1b8aa75d27bc1251c94e1d0 namespace=moby
	Jul 29 10:45:52 functional-727000 dockerd[6064]: time="2024-07-29T10:45:52.677747829Z" level=warning msg="cleaning up after shim disconnected" id=eaaefd361559b107763b9e2f15c23ff3174a29c4a1b8aa75d27bc1251c94e1d0 namespace=moby
	Jul 29 10:45:52 functional-727000 dockerd[6064]: time="2024-07-29T10:45:52.677753371Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 29 10:45:54 functional-727000 dockerd[6064]: time="2024-07-29T10:45:54.143130803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 10:45:54 functional-727000 dockerd[6064]: time="2024-07-29T10:45:54.143175973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 10:45:54 functional-727000 dockerd[6064]: time="2024-07-29T10:45:54.143181848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 10:45:54 functional-727000 dockerd[6064]: time="2024-07-29T10:45:54.143210600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 10:45:54 functional-727000 dockerd[6058]: time="2024-07-29T10:45:54.178531582Z" level=info msg="ignoring event" container=720a0182818fa206ab4b4fa461d6f43aab57dd960ecfcd999362e6ae48ad2ea2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 29 10:45:54 functional-727000 dockerd[6064]: time="2024-07-29T10:45:54.178908943Z" level=info msg="shim disconnected" id=720a0182818fa206ab4b4fa461d6f43aab57dd960ecfcd999362e6ae48ad2ea2 namespace=moby
	Jul 29 10:45:54 functional-727000 dockerd[6064]: time="2024-07-29T10:45:54.178938570Z" level=warning msg="cleaning up after shim disconnected" id=720a0182818fa206ab4b4fa461d6f43aab57dd960ecfcd999362e6ae48ad2ea2 namespace=moby
	Jul 29 10:45:54 functional-727000 dockerd[6064]: time="2024-07-29T10:45:54.178944862Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	720a0182818fa       72565bf5bbedf                                                                                         1 second ago         Exited              echoserver-arm            2                   bb5c41be1d373       hello-node-65f5d5cc78-sdbq4
	ee132d567870c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 seconds ago        Exited              mount-munger              0                   eaaefd361559b       busybox-mount
	1c07a052be3f8       72565bf5bbedf                                                                                         9 seconds ago        Exited              echoserver-arm            2                   dccbbcf0ce5b9       hello-node-connect-6f49f58cd5-7nk7c
	3e5366698ae73       nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c                         21 seconds ago       Running             myfrontend                0                   476ec04e70e58       sp-pod
	cc54ff40e1298       nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         36 seconds ago       Running             nginx                     0                   0c176269a85e7       nginx-svc
	385eadab24caa       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       3                   e95f6e07ee5fc       storage-provisioner
	8abe95790ad10       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   11435420daf98       coredns-7db6d8ff4d-r657x
	31bca3bfc5312       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       2                   e95f6e07ee5fc       storage-provisioner
	d3cdc5bb32c2b       2351f570ed0ea                                                                                         About a minute ago   Running             kube-proxy                2                   7dfbaeb8bba3e       kube-proxy-qrmbh
	846e2986a582e       014faa467e297                                                                                         About a minute ago   Running             etcd                      2                   d26b7df969b81       etcd-functional-727000
	3f13064588b76       8e97cdb19e7cc                                                                                         About a minute ago   Running             kube-controller-manager   2                   93883bd5087b2       kube-controller-manager-functional-727000
	5fd8c8cd3c1cc       d48f992a22722                                                                                         About a minute ago   Running             kube-scheduler            2                   a53b5953af0e5       kube-scheduler-functional-727000
	caaad5f4e95f0       61773190d42ff                                                                                         About a minute ago   Running             kube-apiserver            0                   eb68a2cdbcda5       kube-apiserver-functional-727000
	ee639b072db10       2437cf7621777                                                                                         2 minutes ago        Exited              coredns                   1                   f3a48175ff528       coredns-7db6d8ff4d-r657x
	6bb19568d73fd       2351f570ed0ea                                                                                         2 minutes ago        Exited              kube-proxy                1                   a7c87828c7cb3       kube-proxy-qrmbh
	e5ff4359d790c       d48f992a22722                                                                                         2 minutes ago        Exited              kube-scheduler            1                   9f86fab38730b       kube-scheduler-functional-727000
	b13162454ce38       8e97cdb19e7cc                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   28965a3f0baa3       kube-controller-manager-functional-727000
	314e3b4e109b9       014faa467e297                                                                                         2 minutes ago        Exited              etcd                      1                   acca8e9118461       etcd-functional-727000
	
	
	==> coredns [8abe95790ad1] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[6987723]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 10:44:25.614) (total time: 30001ms):
	Trace[6987723]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:44:55.615)
	Trace[6987723]: [30.001195846s] [30.001195846s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1619024528]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 10:44:25.614) (total time: 30001ms):
	Trace[1619024528]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:44:55.615)
	Trace[1619024528]: [30.001345486s] [30.001345486s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[48362210]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 10:44:25.614) (total time: 30001ms):
	Trace[48362210]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:44:55.615)
	Trace[48362210]: [30.001342231s] [30.001342231s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] 10.244.0.1:46601 - 25701 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000098258s
	[INFO] 10.244.0.1:51567 - 61587 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000093507s
	[INFO] 10.244.0.1:32113 - 21704 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000024835s
	[INFO] 10.244.0.1:36386 - 44300 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.00152204s
	[INFO] 10.244.0.1:19254 - 21220 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000104133s
	[INFO] 10.244.0.1:5357 - 42754 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000134053s
	
	
	==> coredns [ee639b072db1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50127 - 56674 "HINFO IN 8159882244778552604.6608688885996853806. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.008464567s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-727000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-727000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53
	                    minikube.k8s.io/name=functional-727000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T03_43_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 10:43:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-727000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 10:45:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 10:45:26 +0000   Mon, 29 Jul 2024 10:43:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 10:45:26 +0000   Mon, 29 Jul 2024 10:43:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 10:45:26 +0000   Mon, 29 Jul 2024 10:43:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 10:45:26 +0000   Mon, 29 Jul 2024 10:43:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-727000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 68ed8570b23e4705a7c37b4c7c80582a
	  System UUID:                68ed8570b23e4705a7c37b4c7c80582a
	  Boot ID:                    21451871-15a7-4797-850a-64e87680e840
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.0
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-65f5d5cc78-sdbq4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	  default                     hello-node-connect-6f49f58cd5-7nk7c          0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 coredns-7db6d8ff4d-r657x                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m36s
	  kube-system                 etcd-functional-727000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m50s
	  kube-system                 kube-apiserver-functional-727000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-controller-manager-functional-727000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m50s
	  kube-system                 kube-proxy-qrmbh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-scheduler-functional-727000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m50s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m35s                  kube-proxy       
	  Normal  Starting                 90s                    kube-proxy       
	  Normal  Starting                 2m13s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m54s                  kubelet          Node functional-727000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m50s                  kubelet          Node functional-727000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m50s                  kubelet          Node functional-727000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m50s                  kubelet          Node functional-727000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m47s                  kubelet          Node functional-727000 status is now: NodeReady
	  Normal  RegisteredNode           2m37s                  node-controller  Node functional-727000 event: Registered Node functional-727000 in Controller
	  Normal  NodeHasNoDiskPressure    2m18s (x8 over 2m18s)  kubelet          Node functional-727000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m18s (x8 over 2m18s)  kubelet          Node functional-727000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m18s (x7 over 2m18s)  kubelet          Node functional-727000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m2s                   node-controller  Node functional-727000 event: Registered Node functional-727000 in Controller
	  Normal  Starting                 94s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  94s (x8 over 94s)      kubelet          Node functional-727000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    94s (x8 over 94s)      kubelet          Node functional-727000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     94s (x7 over 94s)      kubelet          Node functional-727000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  94s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           79s                    node-controller  Node functional-727000 event: Registered Node functional-727000 in Controller
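	
	# The Events above record three kubelet restarts (2m51s, 2m18s, and 94s ago), matching
	# the functional test's repeated cluster restarts. Hypothetical follow-up queries
	# (assumed, not run by the harness):
	#   kubectl --context functional-727000 get node functional-727000 -o jsonpath='{.status.allocatable}'
	#   kubectl --context functional-727000 get events --field-selector involvedObject.name=functional-727000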
	
	
	==> dmesg <==
	[  +3.386879] kauditd_printk_skb: 199 callbacks suppressed
	[ +12.950662] kauditd_printk_skb: 33 callbacks suppressed
	[  +3.123392] systemd-fstab-generator[5147]: Ignoring "noauto" option for root device
	[Jul29 10:44] systemd-fstab-generator[5579]: Ignoring "noauto" option for root device
	[  +0.056255] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.100476] systemd-fstab-generator[5614]: Ignoring "noauto" option for root device
	[  +0.087185] systemd-fstab-generator[5626]: Ignoring "noauto" option for root device
	[  +0.128472] systemd-fstab-generator[5640]: Ignoring "noauto" option for root device
	[  +5.113244] kauditd_printk_skb: 91 callbacks suppressed
	[  +7.280813] systemd-fstab-generator[6276]: Ignoring "noauto" option for root device
	[  +0.076593] systemd-fstab-generator[6288]: Ignoring "noauto" option for root device
	[  +0.077951] systemd-fstab-generator[6300]: Ignoring "noauto" option for root device
	[  +0.084782] systemd-fstab-generator[6315]: Ignoring "noauto" option for root device
	[  +0.213606] systemd-fstab-generator[6481]: Ignoring "noauto" option for root device
	[  +1.090784] systemd-fstab-generator[6605]: Ignoring "noauto" option for root device
	[  +3.455515] kauditd_printk_skb: 200 callbacks suppressed
	[ +11.662073] kauditd_printk_skb: 32 callbacks suppressed
	[Jul29 10:45] systemd-fstab-generator[7744]: Ignoring "noauto" option for root device
	[  +5.101220] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.467662] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.009943] kauditd_printk_skb: 25 callbacks suppressed
	[ +11.742489] kauditd_printk_skb: 20 callbacks suppressed
	[  +7.548994] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.416863] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.473138] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [314e3b4e109b] <==
	{"level":"info","ts":"2024-07-29T10:43:39.233817Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T10:43:40.819935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T10:43:40.820135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T10:43:40.820196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-07-29T10:43:40.820234Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T10:43:40.820251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-29T10:43:40.8203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T10:43:40.820327Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-29T10:43:40.822807Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T10:43:40.823461Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T10:43:40.823904Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T10:43:40.823989Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T10:43:40.822797Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-727000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T10:43:40.828054Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-07-29T10:43:40.828066Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T10:44:08.156076Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T10:44:08.156101Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-727000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-07-29T10:44:08.15614Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T10:44:08.156184Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T10:44:08.166605Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T10:44:08.166633Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T10:44:08.166661Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-07-29T10:44:08.167506Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-29T10:44:08.167532Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-29T10:44:08.167535Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-727000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [846e2986a582] <==
	{"level":"info","ts":"2024-07-29T10:44:22.866915Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T10:44:22.866091Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-29T10:44:22.866144Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T10:44:22.868212Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T10:44:22.868239Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T10:44:22.866287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-07-29T10:44:22.866322Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-29T10:44:22.870228Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-29T10:44:22.872221Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-07-29T10:44:22.872369Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T10:44:22.872474Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T10:44:24.152464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-29T10:44:24.152611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-29T10:44:24.152685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-29T10:44:24.153047Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-07-29T10:44:24.153094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-29T10:44:24.153128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-07-29T10:44:24.153147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-29T10:44:24.15756Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-727000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T10:44:24.157589Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T10:44:24.158439Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T10:44:24.1585Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T10:44:24.157619Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T10:44:24.162803Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T10:44:24.162803Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	
	==> kernel <==
	 10:45:56 up 3 min,  0 users,  load average: 0.52, 0.44, 0.19
	Linux functional-727000 5.10.207 #1 SMP PREEMPT Tue Jul 23 01:19:38 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [caaad5f4e95f] <==
	I0729 10:44:24.788765       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 10:44:24.788772       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 10:44:24.788822       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 10:44:24.789239       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 10:44:24.789270       1 aggregator.go:165] initial CRD sync complete...
	I0729 10:44:24.789274       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 10:44:24.789276       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 10:44:24.789286       1 cache.go:39] Caches are synced for autoregister controller
	I0729 10:44:24.789428       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 10:44:24.791330       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 10:44:24.810338       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 10:44:25.691113       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 10:44:25.793389       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0729 10:44:25.793954       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 10:44:25.795316       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 10:44:26.187067       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 10:44:26.190818       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 10:44:26.202997       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 10:44:26.227585       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 10:44:26.230902       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 10:45:10.956427       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.125.213"}
	I0729 10:45:16.424433       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.0.13"}
	I0729 10:45:26.779447       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0729 10:45:26.821494       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.92.69"}
	I0729 10:45:40.728859       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.96.75.250"}
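	
	# The alloc.go lines record the ClusterIPs handed to the test services (invalid-svc,
	# nginx-svc, hello-node-connect, hello-node). A simple cross-check of those
	# allocations, not part of the harness:
	#   kubectl --context functional-727000 get svc -o wide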
	
	
	==> kube-controller-manager [3f13064588b7] <==
	I0729 10:44:37.068535       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 10:44:37.113024       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0729 10:44:37.116156       1 shared_informer.go:320] Caches are synced for daemon sets
	I0729 10:44:37.478176       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 10:44:37.515509       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 10:44:37.515519       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 10:45:05.371115       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="3.843613ms"
	I0729 10:45:05.371625       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="11.543µs"
	I0729 10:45:26.786613       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="5.491322ms"
	I0729 10:45:26.790394       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="3.678008ms"
	I0729 10:45:26.790570       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="20.669µs"
	I0729 10:45:26.790628       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="8.751µs"
	I0729 10:45:26.795217       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="49.462µs"
	I0729 10:45:30.493321       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="26.669µs"
	I0729 10:45:31.501996       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="22.418µs"
	I0729 10:45:32.510678       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="23.002µs"
	I0729 10:45:40.695673       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="7.105941ms"
	I0729 10:45:40.700804       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="5.02165ms"
	I0729 10:45:40.700930       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="22.002µs"
	I0729 10:45:40.702916       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="11.209µs"
	I0729 10:45:41.558021       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="21.335µs"
	I0729 10:45:42.565377       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="31.961µs"
	I0729 10:45:46.589913       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="23.168µs"
	I0729 10:45:54.120564       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="25.96µs"
	I0729 10:45:54.636583       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="24.543µs"
	
	
	==> kube-controller-manager [b13162454ce3] <==
	I0729 10:43:54.689249       1 shared_informer.go:320] Caches are synced for taint
	I0729 10:43:54.689272       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0729 10:43:54.689291       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-727000"
	I0729 10:43:54.689338       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 10:43:54.694932       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0729 10:43:54.694946       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0729 10:43:54.694949       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0729 10:43:54.694953       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0729 10:43:54.696028       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0729 10:43:54.704574       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0729 10:43:54.704613       1 shared_informer.go:320] Caches are synced for GC
	I0729 10:43:54.707936       1 shared_informer.go:320] Caches are synced for TTL
	I0729 10:43:54.799538       1 shared_informer.go:320] Caches are synced for deployment
	I0729 10:43:54.801728       1 shared_informer.go:320] Caches are synced for PVC protection
	I0729 10:43:54.803871       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 10:43:54.806033       1 shared_informer.go:320] Caches are synced for disruption
	I0729 10:43:54.807123       1 shared_informer.go:320] Caches are synced for stateful set
	I0729 10:43:54.836246       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 10:43:54.837249       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 10:43:54.883988       1 shared_informer.go:320] Caches are synced for expand
	I0729 10:43:54.885113       1 shared_informer.go:320] Caches are synced for ephemeral
	I0729 10:43:54.885142       1 shared_informer.go:320] Caches are synced for persistent volume
	I0729 10:43:55.245093       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 10:43:55.298459       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 10:43:55.298497       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [6bb19568d73f] <==
	I0729 10:43:42.106321       1 server_linux.go:69] "Using iptables proxy"
	I0729 10:43:42.113895       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0729 10:43:42.123696       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 10:43:42.123717       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 10:43:42.123726       1 server_linux.go:165] "Using iptables Proxier"
	I0729 10:43:42.124348       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 10:43:42.124482       1 server.go:872] "Version info" version="v1.30.3"
	I0729 10:43:42.124490       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 10:43:42.125076       1 config.go:192] "Starting service config controller"
	I0729 10:43:42.125080       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 10:43:42.125090       1 config.go:101] "Starting endpoint slice config controller"
	I0729 10:43:42.125092       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 10:43:42.125255       1 config.go:319] "Starting node config controller"
	I0729 10:43:42.125257       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 10:43:42.226254       1 shared_informer.go:320] Caches are synced for service config
	I0729 10:43:42.226254       1 shared_informer.go:320] Caches are synced for node config
	I0729 10:43:42.226264       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d3cdc5bb32c2] <==
	I0729 10:44:25.619474       1 server_linux.go:69] "Using iptables proxy"
	I0729 10:44:25.623380       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0729 10:44:25.632161       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 10:44:25.632179       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 10:44:25.632187       1 server_linux.go:165] "Using iptables Proxier"
	I0729 10:44:25.632752       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 10:44:25.632828       1 server.go:872] "Version info" version="v1.30.3"
	I0729 10:44:25.632838       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 10:44:25.633376       1 config.go:192] "Starting service config controller"
	I0729 10:44:25.633383       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 10:44:25.633872       1 config.go:101] "Starting endpoint slice config controller"
	I0729 10:44:25.633881       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 10:44:25.634471       1 config.go:319] "Starting node config controller"
	I0729 10:44:25.634568       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 10:44:25.734385       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 10:44:25.734484       1 shared_informer.go:320] Caches are synced for service config
	I0729 10:44:25.734684       1 shared_informer.go:320] Caches are synced for node config
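	
	# Both kube-proxy instances come up in single-stack IPv4 iptables mode because the
	# guest kernel has no IPv6 iptables support. To inspect the service rules kube-proxy
	# programs, a hypothetical check (assumed, not run by the harness):
	#   out/minikube-darwin-arm64 -p functional-727000 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n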
	
	
	==> kube-scheduler [5fd8c8cd3c1c] <==
	I0729 10:44:22.999864       1 serving.go:380] Generated self-signed cert in-memory
	W0729 10:44:24.712686       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 10:44:24.712781       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 10:44:24.712819       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 10:44:24.712841       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 10:44:24.726825       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 10:44:24.726914       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 10:44:24.727649       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 10:44:24.727705       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 10:44:24.727728       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 10:44:24.727762       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 10:44:24.828708       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e5ff4359d790] <==
	I0729 10:43:39.645705       1 serving.go:380] Generated self-signed cert in-memory
	W0729 10:43:41.386499       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 10:43:41.386516       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 10:43:41.386520       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 10:43:41.386534       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 10:43:41.403083       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 10:43:41.403097       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 10:43:41.403801       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 10:43:41.403843       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 10:43:41.403851       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 10:43:41.403858       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 10:43:41.503890       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 10:44:08.154362       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0729 10:44:08.154384       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0729 10:44:08.154462       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0729 10:44:08.154552       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 10:45:40 functional-727000 kubelet[6612]: I0729 10:45:40.694546    6612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=6.981319509 podStartE2EDuration="7.694530277s" podCreationTimestamp="2024-07-29 10:45:33 +0000 UTC" firstStartedPulling="2024-07-29 10:45:33.959159113 +0000 UTC m=+71.908801425" lastFinishedPulling="2024-07-29 10:45:34.672369882 +0000 UTC m=+72.622012193" observedRunningTime="2024-07-29 10:45:35.528662228 +0000 UTC m=+73.478304498" watchObservedRunningTime="2024-07-29 10:45:40.694530277 +0000 UTC m=+78.644172547"
	Jul 29 10:45:40 functional-727000 kubelet[6612]: I0729 10:45:40.694674    6612 topology_manager.go:215] "Topology Admit Handler" podUID="ed52e523-f795-4a0e-aeba-50edd874fcd2" podNamespace="default" podName="hello-node-65f5d5cc78-sdbq4"
	Jul 29 10:45:40 functional-727000 kubelet[6612]: I0729 10:45:40.823556    6612 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fjlr\" (UniqueName: \"kubernetes.io/projected/ed52e523-f795-4a0e-aeba-50edd874fcd2-kube-api-access-4fjlr\") pod \"hello-node-65f5d5cc78-sdbq4\" (UID: \"ed52e523-f795-4a0e-aeba-50edd874fcd2\") " pod="default/hello-node-65f5d5cc78-sdbq4"
	Jul 29 10:45:41 functional-727000 kubelet[6612]: I0729 10:45:41.552860    6612 scope.go:117] "RemoveContainer" containerID="c8bc11903981e189cd963948a3a3b8f606af4d4be0038fddc0b42d242eecc3b3"
	Jul 29 10:45:42 functional-727000 kubelet[6612]: I0729 10:45:42.560953    6612 scope.go:117] "RemoveContainer" containerID="c8bc11903981e189cd963948a3a3b8f606af4d4be0038fddc0b42d242eecc3b3"
	Jul 29 10:45:42 functional-727000 kubelet[6612]: I0729 10:45:42.561113    6612 scope.go:117] "RemoveContainer" containerID="246edf8a8ebe3863392cd276f4628add76d6d54ae94350893a69a1f9006b53f7"
	Jul 29 10:45:42 functional-727000 kubelet[6612]: E0729 10:45:42.561194    6612 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 10s restarting failed container=echoserver-arm pod=hello-node-65f5d5cc78-sdbq4_default(ed52e523-f795-4a0e-aeba-50edd874fcd2)\"" pod="default/hello-node-65f5d5cc78-sdbq4" podUID="ed52e523-f795-4a0e-aeba-50edd874fcd2"
	Jul 29 10:45:46 functional-727000 kubelet[6612]: I0729 10:45:46.112225    6612 scope.go:117] "RemoveContainer" containerID="f45670801c9684066004442261c9cd02601bfabce74c9cc123b68741dd2c405a"
	Jul 29 10:45:46 functional-727000 kubelet[6612]: I0729 10:45:46.584527    6612 scope.go:117] "RemoveContainer" containerID="f45670801c9684066004442261c9cd02601bfabce74c9cc123b68741dd2c405a"
	Jul 29 10:45:46 functional-727000 kubelet[6612]: I0729 10:45:46.584699    6612 scope.go:117] "RemoveContainer" containerID="1c07a052be3f8df284ff27b1613ba8cf473fa443d870713b9c17a2969eb0427b"
	Jul 29 10:45:46 functional-727000 kubelet[6612]: E0729 10:45:46.584778    6612 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-6f49f58cd5-7nk7c_default(e06c31ad-22bb-4ded-a004-7b082ecec62a)\"" pod="default/hello-node-connect-6f49f58cd5-7nk7c" podUID="e06c31ad-22bb-4ded-a004-7b082ecec62a"
	Jul 29 10:45:49 functional-727000 kubelet[6612]: I0729 10:45:49.556260    6612 topology_manager.go:215] "Topology Admit Handler" podUID="27684f43-5092-4b8b-a84f-4a3bdb2a8df4" podNamespace="default" podName="busybox-mount"
	Jul 29 10:45:49 functional-727000 kubelet[6612]: I0729 10:45:49.683135    6612 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/27684f43-5092-4b8b-a84f-4a3bdb2a8df4-test-volume\") pod \"busybox-mount\" (UID: \"27684f43-5092-4b8b-a84f-4a3bdb2a8df4\") " pod="default/busybox-mount"
	Jul 29 10:45:49 functional-727000 kubelet[6612]: I0729 10:45:49.683159    6612 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svqfz\" (UniqueName: \"kubernetes.io/projected/27684f43-5092-4b8b-a84f-4a3bdb2a8df4-kube-api-access-svqfz\") pod \"busybox-mount\" (UID: \"27684f43-5092-4b8b-a84f-4a3bdb2a8df4\") " pod="default/busybox-mount"
	Jul 29 10:45:52 functional-727000 kubelet[6612]: I0729 10:45:52.806004    6612 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/27684f43-5092-4b8b-a84f-4a3bdb2a8df4-test-volume\") pod \"27684f43-5092-4b8b-a84f-4a3bdb2a8df4\" (UID: \"27684f43-5092-4b8b-a84f-4a3bdb2a8df4\") "
	Jul 29 10:45:52 functional-727000 kubelet[6612]: I0729 10:45:52.806031    6612 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svqfz\" (UniqueName: \"kubernetes.io/projected/27684f43-5092-4b8b-a84f-4a3bdb2a8df4-kube-api-access-svqfz\") pod \"27684f43-5092-4b8b-a84f-4a3bdb2a8df4\" (UID: \"27684f43-5092-4b8b-a84f-4a3bdb2a8df4\") "
	Jul 29 10:45:52 functional-727000 kubelet[6612]: I0729 10:45:52.806224    6612 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27684f43-5092-4b8b-a84f-4a3bdb2a8df4-test-volume" (OuterVolumeSpecName: "test-volume") pod "27684f43-5092-4b8b-a84f-4a3bdb2a8df4" (UID: "27684f43-5092-4b8b-a84f-4a3bdb2a8df4"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jul 29 10:45:52 functional-727000 kubelet[6612]: I0729 10:45:52.808709    6612 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27684f43-5092-4b8b-a84f-4a3bdb2a8df4-kube-api-access-svqfz" (OuterVolumeSpecName: "kube-api-access-svqfz") pod "27684f43-5092-4b8b-a84f-4a3bdb2a8df4" (UID: "27684f43-5092-4b8b-a84f-4a3bdb2a8df4"). InnerVolumeSpecName "kube-api-access-svqfz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 10:45:52 functional-727000 kubelet[6612]: I0729 10:45:52.906427    6612 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/27684f43-5092-4b8b-a84f-4a3bdb2a8df4-test-volume\") on node \"functional-727000\" DevicePath \"\""
	Jul 29 10:45:52 functional-727000 kubelet[6612]: I0729 10:45:52.906439    6612 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-svqfz\" (UniqueName: \"kubernetes.io/projected/27684f43-5092-4b8b-a84f-4a3bdb2a8df4-kube-api-access-svqfz\") on node \"functional-727000\" DevicePath \"\""
	Jul 29 10:45:53 functional-727000 kubelet[6612]: I0729 10:45:53.623965    6612 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaaefd361559b107763b9e2f15c23ff3174a29c4a1b8aa75d27bc1251c94e1d0"
	Jul 29 10:45:54 functional-727000 kubelet[6612]: I0729 10:45:54.113856    6612 scope.go:117] "RemoveContainer" containerID="246edf8a8ebe3863392cd276f4628add76d6d54ae94350893a69a1f9006b53f7"
	Jul 29 10:45:54 functional-727000 kubelet[6612]: I0729 10:45:54.631484    6612 scope.go:117] "RemoveContainer" containerID="246edf8a8ebe3863392cd276f4628add76d6d54ae94350893a69a1f9006b53f7"
	Jul 29 10:45:54 functional-727000 kubelet[6612]: I0729 10:45:54.631641    6612 scope.go:117] "RemoveContainer" containerID="720a0182818fa206ab4b4fa461d6f43aab57dd960ecfcd999362e6ae48ad2ea2"
	Jul 29 10:45:54 functional-727000 kubelet[6612]: E0729 10:45:54.631729    6612 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-65f5d5cc78-sdbq4_default(ed52e523-f795-4a0e-aeba-50edd874fcd2)\"" pod="default/hello-node-65f5d5cc78-sdbq4" podUID="ed52e523-f795-4a0e-aeba-50edd874fcd2"
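	
	# The kubelet log shows the echoserver-arm container crash-looping in both hello-node
	# pods, so the services under test never get ready endpoints. Hypothetical post-mortem
	# for the crashing container (pod name taken from this run):
	#   kubectl --context functional-727000 logs hello-node-connect-6f49f58cd5-7nk7c --previous
	#   kubectl --context functional-727000 get pods -o wide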
	
	
	==> storage-provisioner [31bca3bfc531] <==
	I0729 10:44:25.585198       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0729 10:44:25.586063       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [385eadab24ca] <==
	I0729 10:44:40.180649       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 10:44:40.184012       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 10:44:40.184071       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 10:44:57.568982       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 10:44:57.569238       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4d684390-0cfa-43d6-b3dc-5db12ccc8c17", APIVersion:"v1", ResourceVersion:"612", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-727000_24e6ecb3-fd37-4cbf-8dfa-813ab907be32 became leader
	I0729 10:44:57.569271       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-727000_24e6ecb3-fd37-4cbf-8dfa-813ab907be32!
	I0729 10:44:57.670318       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-727000_24e6ecb3-fd37-4cbf-8dfa-813ab907be32!
	I0729 10:45:20.942299       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0729 10:45:20.942680       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"78bbf2e5-1289-485e-aed6-92d267298027", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0729 10:45:20.942428       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    ce76abe8-c2f8-4f0f-80c0-1315e32b06ef 359 0 2024-07-29 10:43:20 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-07-29 10:43:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-78bbf2e5-1289-485e-aed6-92d267298027 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  78bbf2e5-1289-485e-aed6-92d267298027 683 0 2024-07-29 10:45:20 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-07-29 10:45:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-07-29 10:45:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0729 10:45:20.943369       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-78bbf2e5-1289-485e-aed6-92d267298027" provisioned
	I0729 10:45:20.943469       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0729 10:45:20.943479       1 volume_store.go:212] Trying to save persistentvolume "pvc-78bbf2e5-1289-485e-aed6-92d267298027"
	I0729 10:45:20.947321       1 volume_store.go:219] persistentvolume "pvc-78bbf2e5-1289-485e-aed6-92d267298027" saved
	I0729 10:45:20.947887       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"78bbf2e5-1289-485e-aed6-92d267298027", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-78bbf2e5-1289-485e-aed6-92d267298027
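	
	# The provisioning flow above corresponds to a claim like the following, reconstructed
	# from the logged object (names and sizes come from the log; the YAML itself is a sketch):
	#   apiVersion: v1
	#   kind: PersistentVolumeClaim
	#   metadata:
	#     name: myclaim
	#     namespace: default
	#   spec:
	#     accessModes: ["ReadWriteOnce"]
	#     resources:
	#       requests:
	#         storage: 500Mi
	#     volumeMode: Filesystem
	#   # storageClassName is omitted and falls back to "standard", the default class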
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-727000 -n functional-727000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-727000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-727000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-727000 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-727000/192.168.105.4
	Start Time:       Mon, 29 Jul 2024 03:45:49 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://ee132d567870ccea31120d7a2f40cc2a4b84ba4fa2991fce3a8556b66437dc5b
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Jul 2024 03:45:50 -0700
	      Finished:     Mon, 29 Jul 2024 03:45:51 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-svqfz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-svqfz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  7s    default-scheduler  Successfully assigned default/busybox-mount to functional-727000
	  Normal  Pulling    7s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.01s (1.01s including waiting). Image size: 3547125 bytes.
	  Normal  Created    6s    kubelet            Created container mount-munger
	  Normal  Started    5s    kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (29.79s)
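With echoserver-arm in CrashLoopBackOff, the connectivity check this test performs can be approximated by hand; a sketch under that assumption (these commands are not taken from the harness output):

	URL="$(out/minikube-darwin-arm64 -p functional-727000 service hello-node-connect --url)"
	curl -s "$URL"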

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (214.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 node stop m02 -v=7 --alsologtostderr
E0729 03:50:35.329145    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-714000 node stop m02 -v=7 --alsologtostderr: (12.192730584s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 status -v=7 --alsologtostderr
E0729 03:50:55.811077    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/client.crt: no such file or directory
E0729 03:51:36.772389    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/client.crt: no such file or directory
E0729 03:52:58.693025    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/client.crt: no such file or directory
E0729 03:53:20.129063    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/addons-867000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-714000 status -v=7 --alsologtostderr: exit status 7 (2m55.966903333s)

                                                
                                                
-- stdout --
	ha-714000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-714000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-714000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-714000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 03:50:41.415306    2560 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:50:41.415467    2560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:50:41.415471    2560 out.go:304] Setting ErrFile to fd 2...
	I0729 03:50:41.415473    2560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:50:41.415622    2560 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 03:50:41.415741    2560 out.go:298] Setting JSON to false
	I0729 03:50:41.415756    2560 mustload.go:65] Loading cluster: ha-714000
	I0729 03:50:41.415814    2560 notify.go:220] Checking for updates...
	I0729 03:50:41.415983    2560 config.go:182] Loaded profile config "ha-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:50:41.415993    2560 status.go:255] checking status of ha-714000 ...
	I0729 03:50:41.416845    2560 status.go:330] ha-714000 host status = "Running" (err=<nil>)
	I0729 03:50:41.416855    2560 host.go:66] Checking if "ha-714000" exists ...
	I0729 03:50:41.416958    2560 host.go:66] Checking if "ha-714000" exists ...
	I0729 03:50:41.417073    2560 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 03:50:41.417083    2560 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000/id_rsa Username:docker}
	W0729 03:51:07.341612    2560 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0729 03:51:07.341679    2560 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0729 03:51:07.341688    2560 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0729 03:51:07.341692    2560 status.go:257] ha-714000 status: &{Name:ha-714000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 03:51:07.341702    2560 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0729 03:51:07.341706    2560 status.go:255] checking status of ha-714000-m02 ...
	I0729 03:51:07.341924    2560 status.go:330] ha-714000-m02 host status = "Stopped" (err=<nil>)
	I0729 03:51:07.341931    2560 status.go:343] host is not running, skipping remaining checks
	I0729 03:51:07.341933    2560 status.go:257] ha-714000-m02 status: &{Name:ha-714000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 03:51:07.341937    2560 status.go:255] checking status of ha-714000-m03 ...
	I0729 03:51:07.342985    2560 status.go:330] ha-714000-m03 host status = "Running" (err=<nil>)
	I0729 03:51:07.343000    2560 host.go:66] Checking if "ha-714000-m03" exists ...
	I0729 03:51:07.343319    2560 host.go:66] Checking if "ha-714000-m03" exists ...
	I0729 03:51:07.343603    2560 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 03:51:07.343619    2560 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000-m03/id_rsa Username:docker}
	W0729 03:52:22.344402    2560 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0729 03:52:22.344463    2560 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0729 03:52:22.344474    2560 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0729 03:52:22.344478    2560 status.go:257] ha-714000-m03 status: &{Name:ha-714000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 03:52:22.344492    2560 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0729 03:52:22.344496    2560 status.go:255] checking status of ha-714000-m04 ...
	I0729 03:52:22.345206    2560 status.go:330] ha-714000-m04 host status = "Running" (err=<nil>)
	I0729 03:52:22.345213    2560 host.go:66] Checking if "ha-714000-m04" exists ...
	I0729 03:52:22.345302    2560 host.go:66] Checking if "ha-714000-m04" exists ...
	I0729 03:52:22.345424    2560 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 03:52:22.345429    2560 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000-m04/id_rsa Username:docker}
	W0729 03:53:37.346778    2560 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0729 03:53:37.346838    2560 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0729 03:53:37.346848    2560 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0729 03:53:37.346852    2560 status.go:257] ha-714000-m04 status: &{Name:ha-714000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0729 03:53:37.346861    2560 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
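Each `failed to get storage capacity of /var` above is minikube's status probe timing out while dialing the node's SSH port; the long stalls between log lines are the OS connect timeout. A standalone sketch of that probe, assuming golang.org/x/crypto/ssh, with the address, user, and key layout taken from the log (an illustration, not minikube's sshutil code):

package main

import (
	"fmt"
	"log"
	"net"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address follow the per-machine layout shown in the log.
	home, _ := os.UserHomeDir()
	key, err := os.ReadFile(home + "/.minikube/machines/ha-714000/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		// Bound the dial instead of waiting for the OS-level connect
		// timeout seen in the log.
		Timeout: 10 * time.Second,
	}
	client, err := ssh.Dial("tcp", net.JoinHostPort("192.168.105.5", "22"), cfg)
	if err != nil {
		log.Fatalf("status error: %v", err) // e.g. "connect: operation timed out"
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Same probe the log shows: percent used on /var.
	out, err := sess.Output("df -h /var | awk 'NR==2{print $5}'")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("/var usage: %s", out)
}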
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-714000 status -v=7 --alsologtostderr": ha-714000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-714000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-714000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-714000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-714000 status -v=7 --alsologtostderr": ha-714000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-714000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-714000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-714000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-714000 status -v=7 --alsologtostderr": ha-714000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-714000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-714000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-714000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-714000 -n ha-714000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-714000 -n ha-714000: exit status 3 (25.953586625s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0729 03:54:03.300366    2596 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0729 03:54:03.300385    2596 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-714000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.11s)
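A note on the recurring `exit status 7` from `minikube status` in this and the following subtests: minikube composes the status exit code from bit flags, one per failing layer, so 7 reads as host, cluster, and kubernetes all flagged at once, which matches the Error/Nonexistent rows above. A sketch of that composition; the identifiers below are illustrative labels for the assumed bit-flag scheme, not names copied from minikube's source:

package main

import "fmt"

// Assumed bit-flag layout for the `minikube status` exit code; each
// failing layer contributes one bit.
const (
	hostNotRunning    = 1 << 0 // VM unreachable or stopped
	clusterNotRunning = 1 << 1 // cluster components not running
	k8sNotRunning     = 1 << 2 // kubelet/apiserver not running
)

func main() {
	code := hostNotRunning | clusterNotRunning | k8sNotRunning
	fmt.Println(code) // prints 7, the exit status seen in the log
}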

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (102.85s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0729 03:55:14.823969    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m16.88759925s)
ha_test.go:413: expected profile "ha-714000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-714000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-714000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-714000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-714000 -n ha-714000
E0729 03:55:42.532519    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-714000 -n ha-714000: exit status 3 (25.963964125s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0729 03:55:46.146107    2620 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0729 03:55:46.146151    2620 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-714000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (102.85s)

TestMultiControlPlane/serial/RestartSecondaryNode (208.36s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-714000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.119470583s)

-- stdout --
	* Starting "ha-714000-m02" control-plane node in "ha-714000" cluster
	* Restarting existing qemu2 VM for "ha-714000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-714000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 03:55:46.220518    2627 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:55:46.220867    2627 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:55:46.220872    2627 out.go:304] Setting ErrFile to fd 2...
	I0729 03:55:46.220875    2627 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:55:46.221054    2627 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 03:55:46.221365    2627 mustload.go:65] Loading cluster: ha-714000
	I0729 03:55:46.221677    2627 config.go:182] Loaded profile config "ha-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0729 03:55:46.221979    2627 host.go:58] "ha-714000-m02" host status: Stopped
	I0729 03:55:46.226333    2627 out.go:177] * Starting "ha-714000-m02" control-plane node in "ha-714000" cluster
	I0729 03:55:46.230300    2627 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:55:46.230317    2627 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:55:46.230325    2627 cache.go:56] Caching tarball of preloaded images
	I0729 03:55:46.230402    2627 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:55:46.230408    2627 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:55:46.230484    2627 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/ha-714000/config.json ...
	I0729 03:55:46.230892    2627 start.go:360] acquireMachinesLock for ha-714000-m02: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:55:46.230937    2627 start.go:364] duration metric: took 31.167µs to acquireMachinesLock for "ha-714000-m02"
	I0729 03:55:46.230947    2627 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:55:46.230954    2627 fix.go:54] fixHost starting: m02
	I0729 03:55:46.231108    2627 fix.go:112] recreateIfNeeded on ha-714000-m02: state=Stopped err=<nil>
	W0729 03:55:46.231116    2627 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:55:46.234309    2627 out.go:177] * Restarting existing qemu2 VM for "ha-714000-m02" ...
	I0729 03:55:46.238292    2627 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:55:46.238344    2627 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:73:64:74:e6:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000-m02/disk.qcow2
	I0729 03:55:46.241385    2627 main.go:141] libmachine: STDOUT: 
	I0729 03:55:46.241406    2627 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:55:46.241430    2627 fix.go:56] duration metric: took 10.4765ms for fixHost
	I0729 03:55:46.241435    2627 start.go:83] releasing machines lock for "ha-714000-m02", held for 10.493959ms
	W0729 03:55:46.241443    2627 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:55:46.241471    2627 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:55:46.241476    2627 start.go:729] Will try again in 5 seconds ...
	I0729 03:55:51.243089    2627 start.go:360] acquireMachinesLock for ha-714000-m02: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:55:51.243228    2627 start.go:364] duration metric: took 106.791µs to acquireMachinesLock for "ha-714000-m02"
	I0729 03:55:51.243262    2627 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:55:51.243266    2627 fix.go:54] fixHost starting: m02
	I0729 03:55:51.243432    2627 fix.go:112] recreateIfNeeded on ha-714000-m02: state=Stopped err=<nil>
	W0729 03:55:51.243436    2627 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:55:51.247665    2627 out.go:177] * Restarting existing qemu2 VM for "ha-714000-m02" ...
	I0729 03:55:51.250700    2627 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:55:51.250754    2627 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:73:64:74:e6:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000-m02/disk.qcow2
	I0729 03:55:51.252885    2627 main.go:141] libmachine: STDOUT: 
	I0729 03:55:51.252904    2627 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:55:51.252929    2627 fix.go:56] duration metric: took 9.663041ms for fixHost
	I0729 03:55:51.252932    2627 start.go:83] releasing machines lock for "ha-714000-m02", held for 9.698917ms
	W0729 03:55:51.252970    2627 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-714000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-714000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:55:51.256760    2627 out.go:177] 
	W0729 03:55:51.260702    2627 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:55:51.260708    2627 out.go:239] * 
	* 
	W0729 03:55:51.262542    2627 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:55:51.266842    2627 out.go:177] 

** /stderr **
ha_test.go:422: I0729 03:55:46.220518    2627 out.go:291] Setting OutFile to fd 1 ...
I0729 03:55:46.220867    2627 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:55:46.220872    2627 out.go:304] Setting ErrFile to fd 2...
I0729 03:55:46.220875    2627 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:55:46.221054    2627 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
I0729 03:55:46.221365    2627 mustload.go:65] Loading cluster: ha-714000
I0729 03:55:46.221677    2627 config.go:182] Loaded profile config "ha-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
W0729 03:55:46.221979    2627 host.go:58] "ha-714000-m02" host status: Stopped
I0729 03:55:46.226333    2627 out.go:177] * Starting "ha-714000-m02" control-plane node in "ha-714000" cluster
I0729 03:55:46.230300    2627 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0729 03:55:46.230317    2627 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0729 03:55:46.230325    2627 cache.go:56] Caching tarball of preloaded images
I0729 03:55:46.230402    2627 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0729 03:55:46.230408    2627 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0729 03:55:46.230484    2627 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/ha-714000/config.json ...
I0729 03:55:46.230892    2627 start.go:360] acquireMachinesLock for ha-714000-m02: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 03:55:46.230937    2627 start.go:364] duration metric: took 31.167µs to acquireMachinesLock for "ha-714000-m02"
I0729 03:55:46.230947    2627 start.go:96] Skipping create...Using existing machine configuration
I0729 03:55:46.230954    2627 fix.go:54] fixHost starting: m02
I0729 03:55:46.231108    2627 fix.go:112] recreateIfNeeded on ha-714000-m02: state=Stopped err=<nil>
W0729 03:55:46.231116    2627 fix.go:138] unexpected machine state, will restart: <nil>
I0729 03:55:46.234309    2627 out.go:177] * Restarting existing qemu2 VM for "ha-714000-m02" ...
I0729 03:55:46.238292    2627 qemu.go:418] Using hvf for hardware acceleration
I0729 03:55:46.238344    2627 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:73:64:74:e6:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000-m02/disk.qcow2
I0729 03:55:46.241385    2627 main.go:141] libmachine: STDOUT: 
I0729 03:55:46.241406    2627 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0729 03:55:46.241430    2627 fix.go:56] duration metric: took 10.4765ms for fixHost
I0729 03:55:46.241435    2627 start.go:83] releasing machines lock for "ha-714000-m02", held for 10.493959ms
W0729 03:55:46.241443    2627 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 03:55:46.241471    2627 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 03:55:46.241476    2627 start.go:729] Will try again in 5 seconds ...
I0729 03:55:51.243089    2627 start.go:360] acquireMachinesLock for ha-714000-m02: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 03:55:51.243228    2627 start.go:364] duration metric: took 106.791µs to acquireMachinesLock for "ha-714000-m02"
I0729 03:55:51.243262    2627 start.go:96] Skipping create...Using existing machine configuration
I0729 03:55:51.243266    2627 fix.go:54] fixHost starting: m02
I0729 03:55:51.243432    2627 fix.go:112] recreateIfNeeded on ha-714000-m02: state=Stopped err=<nil>
W0729 03:55:51.243436    2627 fix.go:138] unexpected machine state, will restart: <nil>
I0729 03:55:51.247665    2627 out.go:177] * Restarting existing qemu2 VM for "ha-714000-m02" ...
I0729 03:55:51.250700    2627 qemu.go:418] Using hvf for hardware acceleration
I0729 03:55:51.250754    2627 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:73:64:74:e6:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000-m02/disk.qcow2
I0729 03:55:51.252885    2627 main.go:141] libmachine: STDOUT: 
I0729 03:55:51.252904    2627 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0729 03:55:51.252929    2627 fix.go:56] duration metric: took 9.663041ms for fixHost
I0729 03:55:51.252932    2627 start.go:83] releasing machines lock for "ha-714000-m02", held for 9.698917ms
W0729 03:55:51.252970    2627 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-714000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-714000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 03:55:51.256760    2627 out.go:177] 
W0729 03:55:51.260702    2627 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 03:55:51.260708    2627 out.go:239] * 
* 
W0729 03:55:51.262542    2627 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 03:55:51.266842    2627 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-714000 node start m02 -v=7 --alsologtostderr": exit status 80
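Both restart attempts above die on the same `Failed to connect to "/var/run/socket_vmnet": Connection refused`: nothing is accepting connections on the socket_vmnet unix socket, so socket_vmnet_client cannot hand qemu a network fd. A quick probe for that precondition, as a sketch; the socket path is taken from the log, and on a real host the socket may only be readable with the privileges the daemon was started under:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The socket_vmnet daemon must be listening here for the qemu2
	// driver to bring up VM networking.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err) // "connection refused" matches the driver error
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}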
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 status -v=7 --alsologtostderr
E0729 03:58:20.123424    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/addons-867000/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-714000 status -v=7 --alsologtostderr: exit status 7 (2m57.283465458s)

-- stdout --
	ha-714000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-714000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-714000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-714000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0729 03:55:51.301082    2631 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:55:51.301251    2631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:55:51.301254    2631 out.go:304] Setting ErrFile to fd 2...
	I0729 03:55:51.301256    2631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:55:51.301390    2631 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 03:55:51.301518    2631 out.go:298] Setting JSON to false
	I0729 03:55:51.301534    2631 mustload.go:65] Loading cluster: ha-714000
	I0729 03:55:51.301614    2631 notify.go:220] Checking for updates...
	I0729 03:55:51.301791    2631 config.go:182] Loaded profile config "ha-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:55:51.301799    2631 status.go:255] checking status of ha-714000 ...
	I0729 03:55:51.302540    2631 status.go:330] ha-714000 host status = "Running" (err=<nil>)
	I0729 03:55:51.302552    2631 host.go:66] Checking if "ha-714000" exists ...
	I0729 03:55:51.302654    2631 host.go:66] Checking if "ha-714000" exists ...
	I0729 03:55:51.302772    2631 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 03:55:51.302781    2631 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000/id_rsa Username:docker}
	W0729 03:55:51.302962    2631 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0729 03:55:51.302974    2631 retry.go:31] will retry after 204.618971ms: dial tcp 192.168.105.5:22: connect: host is down
	W0729 03:55:51.509750    2631 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0729 03:55:51.509769    2631 retry.go:31] will retry after 293.217331ms: dial tcp 192.168.105.5:22: connect: host is down
	W0729 03:55:51.805247    2631 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0729 03:55:51.805274    2631 retry.go:31] will retry after 819.062135ms: dial tcp 192.168.105.5:22: connect: host is down
	W0729 03:56:18.546670    2631 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0729 03:56:18.546735    2631 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0729 03:56:18.546747    2631 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0729 03:56:18.546757    2631 status.go:257] ha-714000 status: &{Name:ha-714000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 03:56:18.546767    2631 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0729 03:56:18.546771    2631 status.go:255] checking status of ha-714000-m02 ...
	I0729 03:56:18.546970    2631 status.go:330] ha-714000-m02 host status = "Stopped" (err=<nil>)
	I0729 03:56:18.546976    2631 status.go:343] host is not running, skipping remaining checks
	I0729 03:56:18.546978    2631 status.go:257] ha-714000-m02 status: &{Name:ha-714000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 03:56:18.546985    2631 status.go:255] checking status of ha-714000-m03 ...
	I0729 03:56:18.547601    2631 status.go:330] ha-714000-m03 host status = "Running" (err=<nil>)
	I0729 03:56:18.547611    2631 host.go:66] Checking if "ha-714000-m03" exists ...
	I0729 03:56:18.547721    2631 host.go:66] Checking if "ha-714000-m03" exists ...
	I0729 03:56:18.547862    2631 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 03:56:18.547868    2631 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000-m03/id_rsa Username:docker}
	W0729 03:57:33.548465    2631 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0729 03:57:33.548506    2631 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0729 03:57:33.548515    2631 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0729 03:57:33.548519    2631 status.go:257] ha-714000-m03 status: &{Name:ha-714000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 03:57:33.548528    2631 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0729 03:57:33.548538    2631 status.go:255] checking status of ha-714000-m04 ...
	I0729 03:57:33.549233    2631 status.go:330] ha-714000-m04 host status = "Running" (err=<nil>)
	I0729 03:57:33.549241    2631 host.go:66] Checking if "ha-714000-m04" exists ...
	I0729 03:57:33.549350    2631 host.go:66] Checking if "ha-714000-m04" exists ...
	I0729 03:57:33.549480    2631 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 03:57:33.549486    2631 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000-m04/id_rsa Username:docker}
	W0729 03:58:48.550796    2631 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0729 03:58:48.550856    2631 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0729 03:58:48.550870    2631 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0729 03:58:48.550874    2631 status.go:257] ha-714000-m04 status: &{Name:ha-714000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0729 03:58:48.550884    2631 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-714000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-714000 -n ha-714000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-714000 -n ha-714000: exit status 3 (25.956814916s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0729 03:59:14.507193    2655 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0729 03:59:14.507209    2655 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-714000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (208.36s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.38s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-714000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-714000 -v=7 --alsologtostderr
E0729 04:03:20.117754    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/addons-867000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-714000 -v=7 --alsologtostderr: (3m49.005306542s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-714000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-714000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.218443416s)

-- stdout --
	* [ha-714000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-714000" primary control-plane node in "ha-714000" cluster
	* Restarting existing qemu2 VM for "ha-714000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-714000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:04:23.367562    3019 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:04:23.367772    3019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:04:23.367777    3019 out.go:304] Setting ErrFile to fd 2...
	I0729 04:04:23.367780    3019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:04:23.367971    3019 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:04:23.369236    3019 out.go:298] Setting JSON to false
	I0729 04:04:23.389248    3019 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2026,"bootTime":1722249037,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:04:23.389334    3019 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:04:23.393466    3019 out.go:177] * [ha-714000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:04:23.402224    3019 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:04:23.402286    3019 notify.go:220] Checking for updates...
	I0729 04:04:23.410143    3019 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:04:23.413134    3019 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:04:23.416163    3019 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:04:23.419171    3019 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:04:23.420469    3019 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:04:23.423432    3019 config.go:182] Loaded profile config "ha-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:04:23.423509    3019 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:04:23.428175    3019 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:04:23.433090    3019 start.go:297] selected driver: qemu2
	I0729 04:04:23.433104    3019 start.go:901] validating driver "qemu2" against &{Name:ha-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-714000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:04:23.433186    3019 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:04:23.435847    3019 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:04:23.435891    3019 cni.go:84] Creating CNI manager for ""
	I0729 04:04:23.435896    3019 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 04:04:23.435943    3019 start.go:340] cluster config:
	{Name:ha-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-714000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:04:23.440087    3019 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:04:23.448123    3019 out.go:177] * Starting "ha-714000" primary control-plane node in "ha-714000" cluster
	I0729 04:04:23.452135    3019 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:04:23.452153    3019 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:04:23.452162    3019 cache.go:56] Caching tarball of preloaded images
	I0729 04:04:23.452220    3019 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:04:23.452227    3019 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:04:23.452297    3019 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/ha-714000/config.json ...
	I0729 04:04:23.452757    3019 start.go:360] acquireMachinesLock for ha-714000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:04:23.452793    3019 start.go:364] duration metric: took 29.25µs to acquireMachinesLock for "ha-714000"
	I0729 04:04:23.452804    3019 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:04:23.452808    3019 fix.go:54] fixHost starting: 
	I0729 04:04:23.452930    3019 fix.go:112] recreateIfNeeded on ha-714000: state=Stopped err=<nil>
	W0729 04:04:23.452937    3019 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:04:23.456176    3019 out.go:177] * Restarting existing qemu2 VM for "ha-714000" ...
	I0729 04:04:23.464200    3019 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:04:23.464249    3019 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:97:2e:e0:45:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000/disk.qcow2
	I0729 04:04:23.466411    3019 main.go:141] libmachine: STDOUT: 
	I0729 04:04:23.466430    3019 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:04:23.466461    3019 fix.go:56] duration metric: took 13.651375ms for fixHost
	I0729 04:04:23.466465    3019 start.go:83] releasing machines lock for "ha-714000", held for 13.667541ms
	W0729 04:04:23.466472    3019 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:04:23.466506    3019 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:04:23.466511    3019 start.go:729] Will try again in 5 seconds ...
	I0729 04:04:28.468577    3019 start.go:360] acquireMachinesLock for ha-714000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:04:28.468945    3019 start.go:364] duration metric: took 287.25µs to acquireMachinesLock for "ha-714000"
	I0729 04:04:28.469073    3019 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:04:28.469091    3019 fix.go:54] fixHost starting: 
	I0729 04:04:28.469780    3019 fix.go:112] recreateIfNeeded on ha-714000: state=Stopped err=<nil>
	W0729 04:04:28.469806    3019 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:04:28.474426    3019 out.go:177] * Restarting existing qemu2 VM for "ha-714000" ...
	I0729 04:04:28.482223    3019 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:04:28.482472    3019 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:97:2e:e0:45:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000/disk.qcow2
	I0729 04:04:28.491318    3019 main.go:141] libmachine: STDOUT: 
	I0729 04:04:28.491372    3019 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:04:28.491436    3019 fix.go:56] duration metric: took 22.3455ms for fixHost
	I0729 04:04:28.491453    3019 start.go:83] releasing machines lock for "ha-714000", held for 22.486042ms
	W0729 04:04:28.491632    3019 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-714000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-714000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:04:28.499160    3019 out.go:177] 
	W0729 04:04:28.503244    3019 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:04:28.503297    3019 out.go:239] * 
	* 
	W0729 04:04:28.505749    3019 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:04:28.515166    3019 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-714000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-714000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-714000 -n ha-714000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-714000 -n ha-714000: exit status 7 (32.550875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-714000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.38s)
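
Every start attempt in the failure above dies at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so qemu never receives its network file descriptor and the driver gives up. The following is a minimal Go sketch, not part of the test suite, that probes just that precondition by dialing the socket the way any unix-socket client would; the socket path is the one captured in the STDERR above.

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the log above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A refused or absent socket here matches the driver failure above.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe reports "unreachable" on the CI host, whatever daemon owns the socket is not running, and every qemu2 start in this report will fail the same way regardless of cluster state.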

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-714000 node delete m03 -v=7 --alsologtostderr: exit status 83 (40.862166ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-714000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-714000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:04:28.654192    3032 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:04:28.654439    3032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:04:28.654445    3032 out.go:304] Setting ErrFile to fd 2...
	I0729 04:04:28.654448    3032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:04:28.654603    3032 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:04:28.654837    3032 mustload.go:65] Loading cluster: ha-714000
	I0729 04:04:28.655039    3032 config.go:182] Loaded profile config "ha-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0729 04:04:28.655347    3032 out.go:239] ! The control-plane node ha-714000 host is not running (will try others): state=Stopped
	! The control-plane node ha-714000 host is not running (will try others): state=Stopped
	W0729 04:04:28.655448    3032 out.go:239] ! The control-plane node ha-714000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-714000-m02 host is not running (will try others): state=Stopped
	I0729 04:04:28.659924    3032 out.go:177] * The control-plane node ha-714000-m03 host is not running: state=Stopped
	I0729 04:04:28.662989    3032 out.go:177]   To start a cluster, run: "minikube start -p ha-714000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-714000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-714000 status -v=7 --alsologtostderr: exit status 7 (30.03675ms)

                                                
                                                
-- stdout --
	ha-714000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-714000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-714000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-714000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:04:28.694867    3034 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:04:28.694999    3034 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:04:28.695003    3034 out.go:304] Setting ErrFile to fd 2...
	I0729 04:04:28.695005    3034 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:04:28.695121    3034 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:04:28.695254    3034 out.go:298] Setting JSON to false
	I0729 04:04:28.695263    3034 mustload.go:65] Loading cluster: ha-714000
	I0729 04:04:28.695326    3034 notify.go:220] Checking for updates...
	I0729 04:04:28.695472    3034 config.go:182] Loaded profile config "ha-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:04:28.695479    3034 status.go:255] checking status of ha-714000 ...
	I0729 04:04:28.695710    3034 status.go:330] ha-714000 host status = "Stopped" (err=<nil>)
	I0729 04:04:28.695714    3034 status.go:343] host is not running, skipping remaining checks
	I0729 04:04:28.695717    3034 status.go:257] ha-714000 status: &{Name:ha-714000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 04:04:28.695727    3034 status.go:255] checking status of ha-714000-m02 ...
	I0729 04:04:28.695816    3034 status.go:330] ha-714000-m02 host status = "Stopped" (err=<nil>)
	I0729 04:04:28.695818    3034 status.go:343] host is not running, skipping remaining checks
	I0729 04:04:28.695820    3034 status.go:257] ha-714000-m02 status: &{Name:ha-714000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 04:04:28.695828    3034 status.go:255] checking status of ha-714000-m03 ...
	I0729 04:04:28.695922    3034 status.go:330] ha-714000-m03 host status = "Stopped" (err=<nil>)
	I0729 04:04:28.695924    3034 status.go:343] host is not running, skipping remaining checks
	I0729 04:04:28.695926    3034 status.go:257] ha-714000-m03 status: &{Name:ha-714000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 04:04:28.695930    3034 status.go:255] checking status of ha-714000-m04 ...
	I0729 04:04:28.696021    3034 status.go:330] ha-714000-m04 host status = "Stopped" (err=<nil>)
	I0729 04:04:28.696024    3034 status.go:343] host is not running, skipping remaining checks
	I0729 04:04:28.696026    3034 status.go:257] ha-714000-m04 status: &{Name:ha-714000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-714000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-714000 -n ha-714000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-714000 -n ha-714000: exit status 7 (29.234417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-714000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
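
The stderr above shows "node delete" probing each control-plane host in order and giving up only after the last one also reports state=Stopped. Below is a stand-in sketch of that "will try others" walk; Node here is a simplified illustrative type, not minikube's real config struct.

	package sketch

	// Node is a simplified stand-in for a minikube node plus its host state.
	type Node struct {
		Name    string
		Running bool
	}

	// firstRunningControlPlane mirrors the warnings in the log: skip each
	// stopped host and succeed on the first running one, if any.
	func firstRunningControlPlane(nodes []Node) (Node, bool) {
		for _, n := range nodes {
			if n.Running {
				return n, true
			}
		}
		return Node{}, false
	}

With all hosts stopped the loop falls through, which is why the command prints the advice to run "minikube start -p ha-714000" and exits with status 83 instead of attempting the delete.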

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-714000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-714000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-714000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-714000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-714000 -n ha-714000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-714000 -n ha-714000: exit status 7 (28.626042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-714000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
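
The assertion at ha_test.go:413 keys off the "Status" field of the profile JSON dumped above. Below is a minimal decoder for just the fields that check needs; the struct shape is inferred from the captured "profile list --output json" payload, trimmed here to a toy example.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList models only the fields visible in the captured payload.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		payload := []byte(`{"invalid":[],"valid":[{"Name":"ha-714000","Status":"Stopped"}]}`)
		var pl profileList
		if err := json.Unmarshal(payload, &pl); err != nil {
			panic(err)
		}
		fmt.Println(pl.Valid[0].Name, pl.Valid[0].Status) // ha-714000 Stopped
	}

With every host stopped the computed status can only be "Stopped", so the expected "Degraded" never appears no matter how the nodes are counted.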

                                                
                                    
TestMultiControlPlane/serial/StopCluster (202.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 stop -v=7 --alsologtostderr
E0729 04:05:14.814377    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/client.crt: no such file or directory
E0729 04:06:37.882565    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-714000 stop -v=7 --alsologtostderr: (3m21.972296s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-714000 status -v=7 --alsologtostderr: exit status 7 (67.187875ms)

                                                
                                                
-- stdout --
	ha-714000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-714000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-714000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-714000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:07:50.778072    3079 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:07:50.778264    3079 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:07:50.778269    3079 out.go:304] Setting ErrFile to fd 2...
	I0729 04:07:50.778271    3079 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:07:50.778449    3079 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:07:50.778622    3079 out.go:298] Setting JSON to false
	I0729 04:07:50.778634    3079 mustload.go:65] Loading cluster: ha-714000
	I0729 04:07:50.778681    3079 notify.go:220] Checking for updates...
	I0729 04:07:50.778936    3079 config.go:182] Loaded profile config "ha-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:07:50.778944    3079 status.go:255] checking status of ha-714000 ...
	I0729 04:07:50.779219    3079 status.go:330] ha-714000 host status = "Stopped" (err=<nil>)
	I0729 04:07:50.779223    3079 status.go:343] host is not running, skipping remaining checks
	I0729 04:07:50.779226    3079 status.go:257] ha-714000 status: &{Name:ha-714000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 04:07:50.779239    3079 status.go:255] checking status of ha-714000-m02 ...
	I0729 04:07:50.779377    3079 status.go:330] ha-714000-m02 host status = "Stopped" (err=<nil>)
	I0729 04:07:50.779381    3079 status.go:343] host is not running, skipping remaining checks
	I0729 04:07:50.779383    3079 status.go:257] ha-714000-m02 status: &{Name:ha-714000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 04:07:50.779389    3079 status.go:255] checking status of ha-714000-m03 ...
	I0729 04:07:50.779517    3079 status.go:330] ha-714000-m03 host status = "Stopped" (err=<nil>)
	I0729 04:07:50.779522    3079 status.go:343] host is not running, skipping remaining checks
	I0729 04:07:50.779525    3079 status.go:257] ha-714000-m03 status: &{Name:ha-714000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 04:07:50.779530    3079 status.go:255] checking status of ha-714000-m04 ...
	I0729 04:07:50.779651    3079 status.go:330] ha-714000-m04 host status = "Stopped" (err=<nil>)
	I0729 04:07:50.779656    3079 status.go:343] host is not running, skipping remaining checks
	I0729 04:07:50.779659    3079 status.go:257] ha-714000-m04 status: &{Name:ha-714000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-714000 status -v=7 --alsologtostderr": ha-714000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-714000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-714000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-714000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-714000 status -v=7 --alsologtostderr": ha-714000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-714000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-714000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-714000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-714000 status -v=7 --alsologtostderr": ha-714000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-714000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-714000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-714000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-714000 -n ha-714000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-714000 -n ha-714000: exit status 7 (31.997708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-714000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.07s)
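
The three assertions above (ha_test.go:543, :549, :552) run the same shape of check against the status text: count how many nodes report a component in a given state and compare that with the expected cluster layout. A rough approximation of that counting, with strings.Count standing in for the real parsing in ha_test.go:

	package main

	import (
		"fmt"
		"strings"
	)

	// countState counts lines like "kubelet: Stopped" in a status dump.
	func countState(status, component, state string) int {
		return strings.Count(status, component+": "+state)
	}

	func main() {
		status := "host: Stopped\nkubelet: Stopped\napiserver: Stopped\n"
		fmt.Println(countState(status, "kubelet", "Stopped")) // 1
	}

The stop itself succeeded here; the counts are off because the status still lists three control planes and four kubelets, consistent with the earlier DeleteSecondaryNode failure leaving m03 in place, while the test expects two control planes, three kubelets, and two apiservers.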

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-714000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-714000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.181173s)

                                                
                                                
-- stdout --
	* [ha-714000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-714000" primary control-plane node in "ha-714000" cluster
	* Restarting existing qemu2 VM for "ha-714000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-714000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:07:50.839985    3083 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:07:50.840114    3083 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:07:50.840118    3083 out.go:304] Setting ErrFile to fd 2...
	I0729 04:07:50.840120    3083 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:07:50.840250    3083 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:07:50.841262    3083 out.go:298] Setting JSON to false
	I0729 04:07:50.857400    3083 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2233,"bootTime":1722249037,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:07:50.857469    3083 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:07:50.862632    3083 out.go:177] * [ha-714000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:07:50.869664    3083 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:07:50.869694    3083 notify.go:220] Checking for updates...
	I0729 04:07:50.876747    3083 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:07:50.879595    3083 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:07:50.882550    3083 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:07:50.885590    3083 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:07:50.888582    3083 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:07:50.891840    3083 config.go:182] Loaded profile config "ha-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:07:50.892097    3083 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:07:50.896591    3083 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:07:50.903521    3083 start.go:297] selected driver: qemu2
	I0729 04:07:50.903527    3083 start.go:901] validating driver "qemu2" against &{Name:ha-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-714000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:07:50.903591    3083 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:07:50.905918    3083 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:07:50.905945    3083 cni.go:84] Creating CNI manager for ""
	I0729 04:07:50.905957    3083 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 04:07:50.906007    3083 start.go:340] cluster config:
	{Name:ha-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-714000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:07:50.909770    3083 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:07:50.918400    3083 out.go:177] * Starting "ha-714000" primary control-plane node in "ha-714000" cluster
	I0729 04:07:50.922551    3083 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:07:50.922571    3083 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:07:50.922582    3083 cache.go:56] Caching tarball of preloaded images
	I0729 04:07:50.922643    3083 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:07:50.922648    3083 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:07:50.922712    3083 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/ha-714000/config.json ...
	I0729 04:07:50.923114    3083 start.go:360] acquireMachinesLock for ha-714000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:07:50.923147    3083 start.go:364] duration metric: took 27.458µs to acquireMachinesLock for "ha-714000"
	I0729 04:07:50.923157    3083 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:07:50.923163    3083 fix.go:54] fixHost starting: 
	I0729 04:07:50.923277    3083 fix.go:112] recreateIfNeeded on ha-714000: state=Stopped err=<nil>
	W0729 04:07:50.923286    3083 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:07:50.927446    3083 out.go:177] * Restarting existing qemu2 VM for "ha-714000" ...
	I0729 04:07:50.935590    3083 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:07:50.935637    3083 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:97:2e:e0:45:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000/disk.qcow2
	I0729 04:07:50.937521    3083 main.go:141] libmachine: STDOUT: 
	I0729 04:07:50.937540    3083 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:07:50.937565    3083 fix.go:56] duration metric: took 14.401209ms for fixHost
	I0729 04:07:50.937570    3083 start.go:83] releasing machines lock for "ha-714000", held for 14.418458ms
	W0729 04:07:50.937576    3083 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:07:50.937610    3083 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:07:50.937614    3083 start.go:729] Will try again in 5 seconds ...
	I0729 04:07:55.939581    3083 start.go:360] acquireMachinesLock for ha-714000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:07:55.939965    3083 start.go:364] duration metric: took 307.417µs to acquireMachinesLock for "ha-714000"
	I0729 04:07:55.940116    3083 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:07:55.940136    3083 fix.go:54] fixHost starting: 
	I0729 04:07:55.940779    3083 fix.go:112] recreateIfNeeded on ha-714000: state=Stopped err=<nil>
	W0729 04:07:55.940804    3083 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:07:55.945217    3083 out.go:177] * Restarting existing qemu2 VM for "ha-714000" ...
	I0729 04:07:55.953131    3083 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:07:55.953323    3083 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:97:2e:e0:45:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/ha-714000/disk.qcow2
	I0729 04:07:55.961947    3083 main.go:141] libmachine: STDOUT: 
	I0729 04:07:55.962012    3083 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:07:55.962075    3083 fix.go:56] duration metric: took 21.94125ms for fixHost
	I0729 04:07:55.962093    3083 start.go:83] releasing machines lock for "ha-714000", held for 22.10725ms
	W0729 04:07:55.962234    3083 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-714000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-714000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:07:55.969025    3083 out.go:177] 
	W0729 04:07:55.973177    3083 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:07:55.973234    3083 out.go:239] * 
	* 
	W0729 04:07:55.976068    3083 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:07:55.985969    3083 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-714000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-714000 -n ha-714000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-714000 -n ha-714000: exit status 7 (66.654834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-714000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
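
The trace shows the driver start being retried exactly once after a fixed five-second pause (start.go:714, then "Will try again in 5 seconds ..." at start.go:729) before the run is abandoned with GUEST_PROVISION. A sketch of that control flow, with startHost standing in for the real driver start:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startWithRetry mirrors the logged behavior: one immediate attempt,
	// a fixed 5s pause, then one final attempt whose error is terminal.
	func startWithRetry(startHost func() error) error {
		if err := startHost(); err == nil {
			return nil
		}
		time.Sleep(5 * time.Second)
		return startHost()
	}

	func main() {
		err := startWithRetry(func() error {
			return errors.New(`connect /var/run/socket_vmnet: connection refused`)
		})
		fmt.Println(err) // both attempts fail while the socket stays down
	}

A fixed two-attempt retry cannot help when the refusal is instantaneous on both tries, which is why RestartCluster burns only about five seconds before failing.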

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-714000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-714000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-714000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-714000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-714000 -n ha-714000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-714000 -n ha-714000: exit status 7 (29.243083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-714000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)
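
The `profile list --output json` payload quoted in the failure above is a single JSON document, so the node layout the test expected can be pulled out mechanically. A minimal Go sketch, assuming the profiles arrive under a "valid" key and that the nesting under "Config" matches the fields visible in the dump (both are inferences from this log, not from minikube's source):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Subset of the profile fields visible in the dump above; names are copied
// from the JSON, the "valid"/"Config" nesting is an assumption.
type node struct {
	Name         string
	IP           string
	Port         int
	ControlPlane bool
	Worker       bool
}

type profile struct {
	Name   string
	Active bool
	Config struct {
		Nodes []node
	}
}

func main() {
	raw, err := os.ReadFile("profiles.json") // saved output of `profile list --output json`
	if err != nil {
		panic(err)
	}
	var list struct{ Valid []profile }
	if err := json.Unmarshal(raw, &list); err != nil {
		panic(err)
	}
	for _, p := range list.Valid {
		for _, n := range p.Config.Nodes {
			fmt.Printf("%s node=%q ip=%s control-plane=%v\n", p.Name, n.Name, n.IP, n.ControlPlane)
		}
	}
}

Run against the state above, it would list ha-714000's four nodes (three control-plane, one worker) even though every host is stopped, which is the mismatch these serial tests keep tripping over.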

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-714000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-714000 --control-plane -v=7 --alsologtostderr: exit status 83 (40.254834ms)

-- stdout --
	* The control-plane node ha-714000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-714000"

-- /stdout --
** stderr ** 
	I0729 04:07:56.169040    3098 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:07:56.169207    3098 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:07:56.169209    3098 out.go:304] Setting ErrFile to fd 2...
	I0729 04:07:56.169212    3098 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:07:56.169335    3098 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:07:56.169576    3098 mustload.go:65] Loading cluster: ha-714000
	I0729 04:07:56.169787    3098 config.go:182] Loaded profile config "ha-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0729 04:07:56.170117    3098 out.go:239] ! The control-plane node ha-714000 host is not running (will try others): state=Stopped
	! The control-plane node ha-714000 host is not running (will try others): state=Stopped
	W0729 04:07:56.170215    3098 out.go:239] ! The control-plane node ha-714000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-714000-m02 host is not running (will try others): state=Stopped
	I0729 04:07:56.173523    3098 out.go:177] * The control-plane node ha-714000-m03 host is not running: state=Stopped
	I0729 04:07:56.177493    3098 out.go:177]   To start a cluster, run: "minikube start -p ha-714000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-714000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-714000 -n ha-714000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-714000 -n ha-714000: exit status 7 (29.202875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-714000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)
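
The stderr above shows `node add` walking every control-plane host in turn and bailing with exit status 83 once all three report Stopped. The selection logic amounts to a first-running-wins scan; a stand-alone sketch of that behaviour (the Node type and messages are modeled on this log, not minikube's actual internals):

package main

import "fmt"

type Node struct {
	Name         string
	ControlPlane bool
	State        string // "Running" or "Stopped", as reported by the driver
}

// pickControlPlane returns the first control-plane node whose host is up,
// logging a warning for each one it skips, mirroring the output above.
func pickControlPlane(nodes []Node) (Node, error) {
	for _, n := range nodes {
		if !n.ControlPlane {
			continue
		}
		if n.State == "Running" {
			return n, nil
		}
		fmt.Printf("! The control-plane node %s host is not running (will try others): state=%s\n", n.Name, n.State)
	}
	return Node{}, fmt.Errorf("no running control-plane node")
}

func main() {
	nodes := []Node{
		{Name: "ha-714000", ControlPlane: true, State: "Stopped"},
		{Name: "ha-714000-m02", ControlPlane: true, State: "Stopped"},
		{Name: "ha-714000-m03", ControlPlane: true, State: "Stopped"},
	}
	if _, err := pickControlPlane(nodes); err != nil {
		fmt.Println("exit status 83:", err) // matches the behaviour logged above
	}
}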

TestImageBuild/serial/Setup (10.22s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-303000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-303000 --driver=qemu2 : exit status 80 (10.146220417s)

-- stdout --
	* [image-303000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-303000" primary control-plane node in "image-303000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-303000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-303000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-303000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-303000 -n image-303000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-303000 -n image-303000: exit status 7 (70.592125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-303000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.22s)
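
Every qemu2 start in this report dies on the same precondition: nothing is accepting connections on /var/run/socket_vmnet, so socket_vmnet_client refuses to launch the VM. A pre-flight probe would surface that environment failure before each test spends ten seconds rediscovering it; a minimal sketch using Go's unix-socket dialer:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// This is exactly the state the tests above ran into:
		// Failed to connect to "/var/run/socket_vmnet": Connection refused
		fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is up; qemu2 VMs should be able to attach")
}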

TestJSONOutput/start/Command (9.82s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-510000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-510000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.821160416s)

-- stdout --
	{"specversion":"1.0","id":"46f250d6-458d-4ca5-9304-b4a9dacb7032","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-510000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c94f621d-d7f2-4a9b-abfc-fc9393832528","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19336"}}
	{"specversion":"1.0","id":"afa73f8d-d07d-4701-a3ce-2c15e5135a3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig"}}
	{"specversion":"1.0","id":"aafaa6e0-1b66-4e83-82b4-0040bce24328","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"6694a20c-9ae4-4199-9136-9e1c81494cd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2ba167e8-cdc9-4a4e-ba58-35a3a4cb02ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube"}}
	{"specversion":"1.0","id":"524ecdec-d55f-417c-83a8-4b79edaee5e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e1479834-8f73-4405-b1bd-605cb9779574","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"729d54ba-395d-4910-806e-bb0e2e6acca6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"3c1b3522-0977-46c8-b9be-7c71a787c0e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-510000\" primary control-plane node in \"json-output-510000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4e2c68ed-e570-4406-9db3-d08103c88585","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"c51a7b95-9b8d-4e9a-9147-84e695db2f7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-510000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"3e7bf0d8-8d52-4adf-89c5-cde92b0f5528","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"8fb9df3e-afa3-42ef-b022-881b7479d5fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"16285b7c-5b8f-4da3-9054-bb471d9e1142","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-510000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"7ba932eb-0ff4-45ae-83dc-6658be6d64bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"cd5edb57-c55c-4c9c-a391-e88d63ef0580","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-510000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.82s)
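
The two extra failure lines here matter: the JSON-output tests decode stdout line by line as CloudEvents, so the bare "OUTPUT: " line that socket_vmnet_client writes straight to stdout is the first byte 'O' that json.Unmarshal chokes on. Roughly what the harness does (an approximation of json_output_test.go, not a copy):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Interleaved output as captured above: one CloudEvent per line, plus the
	// raw lines socket_vmnet_client prints directly to stdout.
	lines := []string{
		`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}`,
		`OUTPUT: `,
		`ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`,
	}
	for _, line := range lines {
		var ev map[string]interface{}
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			// Reproduces the failure above:
			// invalid character 'O' looking for beginning of value
			fmt.Printf("converting to cloud events: %v\n", err)
			return
		}
		fmt.Println("event type:", ev["type"])
	}
}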

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-510000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-510000 --output=json --user=testUser: exit status 83 (78.043958ms)

-- stdout --
	{"specversion":"1.0","id":"e658df8c-0e57-405f-bbde-adba964dd958","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-510000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"18d7947c-9b61-4e5b-81e1-83e55cb48ce1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-510000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-510000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-510000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-510000 --output=json --user=testUser: exit status 83 (44.072583ms)

-- stdout --
	* The control-plane node json-output-510000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-510000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-510000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-510000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.06s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-113000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-113000 --driver=qemu2 : exit status 80 (9.767309709s)

-- stdout --
	* [first-113000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-113000" primary control-plane node in "first-113000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-113000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-113000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-113000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-29 04:08:30.102798 -0700 PDT m=+2049.335484043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-114000 -n second-114000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-114000 -n second-114000: exit status 85 (76.163459ms)

-- stdout --
	* Profile "second-114000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-114000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-114000" host is not running, skipping log retrieval (state="* Profile \"second-114000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-114000\"")
helpers_test.go:175: Cleaning up "second-114000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-114000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-29 04:08:30.281945 -0700 PDT m=+2049.514637334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-113000 -n first-113000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-113000 -n first-113000: exit status 7 (29.401875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-113000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-113000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-113000
--- FAIL: TestMinikubeProfile (10.06s)
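
The post-mortem helpers distinguish outcomes purely by minikube's exit code: in this run, 7 came back for a stopped host and 85 for a profile that does not exist. A sketch of consuming those codes from Go (the code-to-meaning mapping is read off this log, not minikube's documented contract):

package main

import (
	"fmt"
	"os/exec"
)

// hostState runs the same status query the helpers use and folds the exit
// code into a human-readable state.
func hostState(profile string) string {
	out, err := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	if err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			switch ee.ExitCode() {
			case 7:
				return "host stopped (may be ok)"
			case 85:
				return "profile not found"
			default:
				return fmt.Sprintf("status error: exit status %d", ee.ExitCode())
			}
		}
		return err.Error()
	}
	return string(out)
}

func main() {
	fmt.Println(hostState("first-113000"))  // "host stopped (may be ok)" in this run
	fmt.Println(hostState("second-114000")) // "profile not found" in this run
}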

TestMountStart/serial/StartWithMountFirst (10.03s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-977000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-977000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.958054583s)

-- stdout --
	* [mount-start-1-977000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-977000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-977000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-977000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-977000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-977000 -n mount-start-1-977000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-977000 -n mount-start-1-977000: exit status 7 (66.960959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-977000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.03s)

TestMultiNode/serial/FreshStart2Nodes (10.11s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-369000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-369000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (10.040620667s)

-- stdout --
	* [multinode-369000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-369000" primary control-plane node in "multinode-369000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-369000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:08:40.621721    3237 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:08:40.621841    3237 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:08:40.621845    3237 out.go:304] Setting ErrFile to fd 2...
	I0729 04:08:40.621847    3237 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:08:40.622001    3237 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:08:40.623081    3237 out.go:298] Setting JSON to false
	I0729 04:08:40.638970    3237 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2283,"bootTime":1722249037,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:08:40.639040    3237 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:08:40.645551    3237 out.go:177] * [multinode-369000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:08:40.653447    3237 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:08:40.653498    3237 notify.go:220] Checking for updates...
	I0729 04:08:40.660478    3237 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:08:40.663488    3237 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:08:40.666553    3237 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:08:40.669528    3237 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:08:40.671069    3237 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:08:40.674652    3237 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:08:40.678511    3237 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:08:40.683496    3237 start.go:297] selected driver: qemu2
	I0729 04:08:40.683504    3237 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:08:40.683512    3237 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:08:40.685815    3237 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:08:40.688570    3237 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:08:40.691638    3237 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:08:40.691664    3237 cni.go:84] Creating CNI manager for ""
	I0729 04:08:40.691676    3237 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 04:08:40.691680    3237 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 04:08:40.691712    3237 start.go:340] cluster config:
	{Name:multinode-369000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:08:40.695355    3237 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:08:40.702513    3237 out.go:177] * Starting "multinode-369000" primary control-plane node in "multinode-369000" cluster
	I0729 04:08:40.706487    3237 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:08:40.706502    3237 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:08:40.706514    3237 cache.go:56] Caching tarball of preloaded images
	I0729 04:08:40.706572    3237 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:08:40.706578    3237 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:08:40.706779    3237 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/multinode-369000/config.json ...
	I0729 04:08:40.706790    3237 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/multinode-369000/config.json: {Name:mk8f76f9d4937b35f63346e56f54e98bea9a8d65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:08:40.707011    3237 start.go:360] acquireMachinesLock for multinode-369000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:08:40.707045    3237 start.go:364] duration metric: took 27.958µs to acquireMachinesLock for "multinode-369000"
	I0729 04:08:40.707057    3237 start.go:93] Provisioning new machine with config: &{Name:multinode-369000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:08:40.707084    3237 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:08:40.713473    3237 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:08:40.730668    3237 start.go:159] libmachine.API.Create for "multinode-369000" (driver="qemu2")
	I0729 04:08:40.730695    3237 client.go:168] LocalClient.Create starting
	I0729 04:08:40.730755    3237 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:08:40.730783    3237 main.go:141] libmachine: Decoding PEM data...
	I0729 04:08:40.730791    3237 main.go:141] libmachine: Parsing certificate...
	I0729 04:08:40.730827    3237 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:08:40.730849    3237 main.go:141] libmachine: Decoding PEM data...
	I0729 04:08:40.730861    3237 main.go:141] libmachine: Parsing certificate...
	I0729 04:08:40.731245    3237 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:08:40.882904    3237 main.go:141] libmachine: Creating SSH key...
	I0729 04:08:40.995672    3237 main.go:141] libmachine: Creating Disk image...
	I0729 04:08:40.995677    3237 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:08:40.995842    3237 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/disk.qcow2
	I0729 04:08:41.004913    3237 main.go:141] libmachine: STDOUT: 
	I0729 04:08:41.004933    3237 main.go:141] libmachine: STDERR: 
	I0729 04:08:41.004984    3237 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/disk.qcow2 +20000M
	I0729 04:08:41.012749    3237 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:08:41.012761    3237 main.go:141] libmachine: STDERR: 
	I0729 04:08:41.012771    3237 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/disk.qcow2
	I0729 04:08:41.012776    3237 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:08:41.012792    3237 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:08:41.012815    3237 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:5b:f0:1e:b7:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/disk.qcow2
	I0729 04:08:41.014398    3237 main.go:141] libmachine: STDOUT: 
	I0729 04:08:41.014410    3237 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:08:41.014432    3237 client.go:171] duration metric: took 283.741625ms to LocalClient.Create
	I0729 04:08:43.016546    3237 start.go:128] duration metric: took 2.309510875s to createHost
	I0729 04:08:43.016611    3237 start.go:83] releasing machines lock for "multinode-369000", held for 2.309631375s
	W0729 04:08:43.016706    3237 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:08:43.030058    3237 out.go:177] * Deleting "multinode-369000" in qemu2 ...
	W0729 04:08:43.062349    3237 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:08:43.062376    3237 start.go:729] Will try again in 5 seconds ...
	I0729 04:08:48.064465    3237 start.go:360] acquireMachinesLock for multinode-369000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:08:48.064992    3237 start.go:364] duration metric: took 373.084µs to acquireMachinesLock for "multinode-369000"
	I0729 04:08:48.065132    3237 start.go:93] Provisioning new machine with config: &{Name:multinode-369000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:08:48.065399    3237 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:08:48.075982    3237 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:08:48.127641    3237 start.go:159] libmachine.API.Create for "multinode-369000" (driver="qemu2")
	I0729 04:08:48.127695    3237 client.go:168] LocalClient.Create starting
	I0729 04:08:48.127788    3237 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:08:48.127851    3237 main.go:141] libmachine: Decoding PEM data...
	I0729 04:08:48.127866    3237 main.go:141] libmachine: Parsing certificate...
	I0729 04:08:48.127927    3237 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:08:48.127969    3237 main.go:141] libmachine: Decoding PEM data...
	I0729 04:08:48.127982    3237 main.go:141] libmachine: Parsing certificate...
	I0729 04:08:48.128519    3237 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:08:48.290501    3237 main.go:141] libmachine: Creating SSH key...
	I0729 04:08:48.560524    3237 main.go:141] libmachine: Creating Disk image...
	I0729 04:08:48.560532    3237 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:08:48.560770    3237 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/disk.qcow2
	I0729 04:08:48.570663    3237 main.go:141] libmachine: STDOUT: 
	I0729 04:08:48.570687    3237 main.go:141] libmachine: STDERR: 
	I0729 04:08:48.570752    3237 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/disk.qcow2 +20000M
	I0729 04:08:48.578567    3237 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:08:48.578585    3237 main.go:141] libmachine: STDERR: 
	I0729 04:08:48.578594    3237 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/disk.qcow2
	I0729 04:08:48.578600    3237 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:08:48.578607    3237 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:08:48.578634    3237 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:7f:f3:47:1a:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/disk.qcow2
	I0729 04:08:48.580242    3237 main.go:141] libmachine: STDOUT: 
	I0729 04:08:48.580266    3237 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:08:48.580280    3237 client.go:171] duration metric: took 452.595042ms to LocalClient.Create
	I0729 04:08:50.582385    3237 start.go:128] duration metric: took 2.517037792s to createHost
	I0729 04:08:50.582439    3237 start.go:83] releasing machines lock for "multinode-369000", held for 2.517495708s
	W0729 04:08:50.582764    3237 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-369000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-369000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:08:50.597451    3237 out.go:177] 
	W0729 04:08:50.607514    3237 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:08:50.607539    3237 out.go:239] * 
	* 
	W0729 04:08:50.610095    3237 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:08:50.620460    3237 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-369000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000: exit status 7 (65.837375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.11s)
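
The trace above lays out libmachine's whole recovery path: create the disk with qemu-img, fail to attach to socket_vmnet, delete the half-built machine, wait five seconds, try once more, then exit 80 with GUEST_PROVISION. Condensed to its control flow (helper names are stand-ins for start.go's internals, not the real API):

package main

import (
	"fmt"
	"time"
)

// startWithRetry mirrors the single-retry pattern in the log: one cleanup
// and a five-second pause between the two attempts.
func startWithRetry(create func() error, cleanup func()) error {
	if err := create(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		cleanup() // "* Deleting ... in qemu2 ..."
		time.Sleep(5 * time.Second)
		if err := create(); err != nil {
			return fmt.Errorf("exiting due to GUEST_PROVISION: %w", err)
		}
	}
	return nil
}

func main() {
	create := func() error {
		return fmt.Errorf(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}
	cleanup := func() { fmt.Println(`* Deleting "multinode-369000" in qemu2 ...`) }
	if err := startWithRetry(create, cleanup); err != nil {
		fmt.Println("X", err) // exit status 80 in the run above
	}
}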

TestMultiNode/serial/DeployApp2Nodes (85.36s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-369000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-369000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (123.758291ms)

** stderr ** 
	error: cluster "multinode-369000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-369000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-369000 -- rollout status deployment/busybox: exit status 1 (58.428125ms)

** stderr ** 
	error: no server found for cluster "multinode-369000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.05725ms)

** stderr ** 
	error: no server found for cluster "multinode-369000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.034375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-369000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.158958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-369000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.059917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-369000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.211375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-369000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.092417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-369000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.538459ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-369000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.9035ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-369000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.278791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-369000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0729 04:10:14.748338    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.805042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-369000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.1315ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-369000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-369000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-369000 -- exec  -- nslookup kubernetes.io: exit status 1 (54.322916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-369000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-369000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-369000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.651083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-369000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-369000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-369000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.873458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-369000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000: exit status 7 (29.148292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (85.36s)
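Every kubectl call above fails identically because the cluster from FreshStart2Nodes never came up; the ten "may be temporary" lines are the poll at multinode_test.go:505 retrying until its deadline. A minimal Go sketch of that retry-until-deadline pattern (illustrative only; pollPodIPs is a hypothetical helper, not the test's actual code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// pollPodIPs retries the pod-IP query until it succeeds or the deadline passes.
func pollPodIPs(deadline time.Time) (string, error) {
	var lastErr error
	for time.Now().Before(deadline) {
		out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", "multinode-369000",
			"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		lastErr = err // "may be temporary", as the log puts it
		time.Sleep(5 * time.Second)
	}
	return "", fmt.Errorf("failed to resolve pod IPs: %w", lastErr)
}

func main() {
	ips, err := pollPodIPs(time.Now().Add(90 * time.Second))
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("pod IPs:", ips)
}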

TestMultiNode/serial/PingHostFrom2Pods (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.412125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-369000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000: exit status 7 (29.284833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)
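The post-mortem helper above renders status --format={{.Host}}, a Go text/template evaluated against the same Status value dumped by status.go:257 in the traces. A small sketch of how such a template produces the "Stopped" seen in every post-mortem (the struct mirrors the fields in the trace and is illustrative, not minikube's own type):

package main

import (
	"os"
	"text/template"
)

// Status mirrors the fields dumped at status.go:257; illustrative only.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	st := Status{Name: "multinode-369000", Host: "Stopped", Kubelet: "Stopped",
		APIServer: "Stopped", Kubeconfig: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	_ = tmpl.Execute(os.Stdout, st) // prints "Stopped", as in every post-mortem above
}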

TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-369000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-369000 -v 3 --alsologtostderr: exit status 83 (39.858792ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-369000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:10:16.171320    3324 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:10:16.171466    3324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:10:16.171469    3324 out.go:304] Setting ErrFile to fd 2...
	I0729 04:10:16.171471    3324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:10:16.171594    3324 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:10:16.171813    3324 mustload.go:65] Loading cluster: multinode-369000
	I0729 04:10:16.171990    3324 config.go:182] Loaded profile config "multinode-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:10:16.175717    3324 out.go:177] * The control-plane node multinode-369000 host is not running: state=Stopped
	I0729 04:10:16.179712    3324 out.go:177]   To start a cluster, run: "minikube start -p multinode-369000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-369000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000: exit status 7 (29.022375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)
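node add refuses to run against a stopped control plane and exits with code 83, one of several distinct codes in this report (80 for start failures, 85 for missing nodes, 7 for status on a stopped host). A sketch of how a harness recovers such codes from a failed command using only the standard library:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// The failing invocation from the step above.
	cmd := exec.Command("out/minikube-darwin-arm64", "node", "add",
		"-p", "multinode-369000", "-v", "3", "--alsologtostderr")
	err := cmd.Run()
	var ee *exec.ExitError
	switch {
	case errors.As(err, &ee):
		fmt.Printf("exit status %d\n", ee.ExitCode()) // 83 in the run above
	case err != nil:
		fmt.Println("command did not start:", err)
	default:
		fmt.Println("exit status 0")
	}
}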

TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-369000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-369000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (29.03375ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-369000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-369000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-369000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000: exit status 7 (29.722958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
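The "unexpected end of JSON input" above is encoding/json's error for empty input: kubectl printed nothing because the context lookup failed, and the test then tried to decode the empty output. A two-line reproduction:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels) // kubectl printed nothing
	fmt.Println(err)                           // unexpected end of JSON input
}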

TestMultiNode/serial/ProfileList (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-369000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-369000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-369000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-369000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000: exit status 7 (29.357833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.07s)
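The check at multinode_test.go:166 counts the entries under Config.Nodes in the JSON above; only the primary control-plane node is present, hence "include 3 nodes but have 1 nodes". A sketch of that count against a sample trimmed from the dump (structs reduced to the visible fields; not the test's actual types):

package main

import (
	"encoding/json"
	"fmt"
)

// profileList is trimmed to the fields visible in the dump above.
type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct {
				Name         string
				ControlPlane bool
				Worker       bool
			}
		}
	} `json:"valid"`
}

func main() {
	// Sample reduced from the `profile list --output json` dump above.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-369000","Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	fmt.Println(len(pl.Valid[0].Config.Nodes), "node(s)") // 1 here; the test wants 3
}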

TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-369000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-369000 status --output json --alsologtostderr: exit status 7 (28.440792ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-369000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:10:16.373010    3336 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:10:16.373164    3336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:10:16.373167    3336 out.go:304] Setting ErrFile to fd 2...
	I0729 04:10:16.373170    3336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:10:16.373309    3336 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:10:16.373426    3336 out.go:298] Setting JSON to true
	I0729 04:10:16.373435    3336 mustload.go:65] Loading cluster: multinode-369000
	I0729 04:10:16.373501    3336 notify.go:220] Checking for updates...
	I0729 04:10:16.373621    3336 config.go:182] Loaded profile config "multinode-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:10:16.373628    3336 status.go:255] checking status of multinode-369000 ...
	I0729 04:10:16.373831    3336 status.go:330] multinode-369000 host status = "Stopped" (err=<nil>)
	I0729 04:10:16.373835    3336 status.go:343] host is not running, skipping remaining checks
	I0729 04:10:16.373837    3336 status.go:257] multinode-369000 status: &{Name:multinode-369000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-369000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000: exit status 7 (29.359209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
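The decode error above comes from a shape mismatch: with a single node, status --output json prints one JSON object, while the test unmarshals into a slice ([]cmd.Status). encoding/json refuses to put an object into a slice, as this sketch shows (the Status type mirrors the dumped fields and is illustrative, not minikube's cmd.Status):

package main

import (
	"encoding/json"
	"fmt"
)

// Status mirrors the dumped fields; illustrative only.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	raw := []byte(`{"Name":"multinode-369000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

	var many []Status                       // what the test expects
	fmt.Println(json.Unmarshal(raw, &many)) // json: cannot unmarshal object into Go value of type []main.Status

	var one Status // what the single-node output actually is
	if err := json.Unmarshal(raw, &one); err == nil {
		fmt.Println(one.Host) // Stopped
	}
}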

TestMultiNode/serial/StopNode (0.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-369000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-369000 node stop m03: exit status 85 (46.831708ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-369000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-369000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-369000 status: exit status 7 (28.724833ms)

                                                
                                                
-- stdout --
	multinode-369000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-369000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-369000 status --alsologtostderr: exit status 7 (28.706542ms)

                                                
                                                
-- stdout --
	multinode-369000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:10:16.507440    3344 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:10:16.507584    3344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:10:16.507587    3344 out.go:304] Setting ErrFile to fd 2...
	I0729 04:10:16.507589    3344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:10:16.507729    3344 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:10:16.507852    3344 out.go:298] Setting JSON to false
	I0729 04:10:16.507862    3344 mustload.go:65] Loading cluster: multinode-369000
	I0729 04:10:16.507906    3344 notify.go:220] Checking for updates...
	I0729 04:10:16.508074    3344 config.go:182] Loaded profile config "multinode-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:10:16.508081    3344 status.go:255] checking status of multinode-369000 ...
	I0729 04:10:16.508289    3344 status.go:330] multinode-369000 host status = "Stopped" (err=<nil>)
	I0729 04:10:16.508293    3344 status.go:343] host is not running, skipping remaining checks
	I0729 04:10:16.508295    3344 status.go:257] multinode-369000 status: &{Name:multinode-369000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-369000 status --alsologtostderr": multinode-369000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000: exit status 7 (29.052917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)
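node stop fails with GUEST_NODE_RETRIEVE because m03 was never created: the two-node start failed, so the profile holds only its primary node. The suite itself lists nodes with node list (multinode_test.go:314, under RestartKeepsNodes below); a sketch of the same check run standalone:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `node list` is the subcommand the suite runs at multinode_test.go:314;
	// expect a single multinode-369000 entry here, and no m03.
	out, err := exec.Command("out/minikube-darwin-arm64", "node", "list",
		"-p", "multinode-369000").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("node list failed:", err)
	}
}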

TestMultiNode/serial/StartAfterStop (48.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-369000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-369000 node start m03 -v=7 --alsologtostderr: exit status 85 (44.066166ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:10:16.566141    3348 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:10:16.566381    3348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:10:16.566385    3348 out.go:304] Setting ErrFile to fd 2...
	I0729 04:10:16.566387    3348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:10:16.566526    3348 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:10:16.566754    3348 mustload.go:65] Loading cluster: multinode-369000
	I0729 04:10:16.566954    3348 config.go:182] Loaded profile config "multinode-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:10:16.571673    3348 out.go:177] 
	W0729 04:10:16.574742    3348 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0729 04:10:16.574746    3348 out.go:239] * 
	* 
	W0729 04:10:16.576375    3348 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:10:16.577664    3348 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0729 04:10:16.566141    3348 out.go:291] Setting OutFile to fd 1 ...
I0729 04:10:16.566381    3348 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:10:16.566385    3348 out.go:304] Setting ErrFile to fd 2...
I0729 04:10:16.566387    3348 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:10:16.566526    3348 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
I0729 04:10:16.566754    3348 mustload.go:65] Loading cluster: multinode-369000
I0729 04:10:16.566954    3348 config.go:182] Loaded profile config "multinode-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 04:10:16.571673    3348 out.go:177] 
W0729 04:10:16.574742    3348 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0729 04:10:16.574746    3348 out.go:239] * 
* 
W0729 04:10:16.576375    3348 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 04:10:16.577664    3348 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-369000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-369000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-369000 status -v=7 --alsologtostderr: exit status 7 (28.780833ms)

                                                
                                                
-- stdout --
	multinode-369000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:10:16.609787    3350 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:10:16.609927    3350 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:10:16.609930    3350 out.go:304] Setting ErrFile to fd 2...
	I0729 04:10:16.609933    3350 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:10:16.610070    3350 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:10:16.610190    3350 out.go:298] Setting JSON to false
	I0729 04:10:16.610200    3350 mustload.go:65] Loading cluster: multinode-369000
	I0729 04:10:16.610265    3350 notify.go:220] Checking for updates...
	I0729 04:10:16.610422    3350 config.go:182] Loaded profile config "multinode-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:10:16.610428    3350 status.go:255] checking status of multinode-369000 ...
	I0729 04:10:16.610624    3350 status.go:330] multinode-369000 host status = "Stopped" (err=<nil>)
	I0729 04:10:16.610628    3350 status.go:343] host is not running, skipping remaining checks
	I0729 04:10:16.610630    3350 status.go:257] multinode-369000 status: &{Name:multinode-369000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-369000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-369000 status -v=7 --alsologtostderr: exit status 7 (71.514458ms)

                                                
                                                
-- stdout --
	multinode-369000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:10:17.493504    3352 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:10:17.493728    3352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:10:17.493737    3352 out.go:304] Setting ErrFile to fd 2...
	I0729 04:10:17.493740    3352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:10:17.493912    3352 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:10:17.494098    3352 out.go:298] Setting JSON to false
	I0729 04:10:17.494112    3352 mustload.go:65] Loading cluster: multinode-369000
	I0729 04:10:17.494161    3352 notify.go:220] Checking for updates...
	I0729 04:10:17.494359    3352 config.go:182] Loaded profile config "multinode-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:10:17.494367    3352 status.go:255] checking status of multinode-369000 ...
	I0729 04:10:17.494635    3352 status.go:330] multinode-369000 host status = "Stopped" (err=<nil>)
	I0729 04:10:17.494640    3352 status.go:343] host is not running, skipping remaining checks
	I0729 04:10:17.494643    3352 status.go:257] multinode-369000 status: &{Name:multinode-369000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-369000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-369000 status -v=7 --alsologtostderr: exit status 7 (71.567375ms)

                                                
                                                
-- stdout --
	multinode-369000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:10:18.680775    3354 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:10:18.680962    3354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:10:18.680967    3354 out.go:304] Setting ErrFile to fd 2...
	I0729 04:10:18.680970    3354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:10:18.681137    3354 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:10:18.681301    3354 out.go:298] Setting JSON to false
	I0729 04:10:18.681313    3354 mustload.go:65] Loading cluster: multinode-369000
	I0729 04:10:18.681358    3354 notify.go:220] Checking for updates...
	I0729 04:10:18.681579    3354 config.go:182] Loaded profile config "multinode-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:10:18.681589    3354 status.go:255] checking status of multinode-369000 ...
	I0729 04:10:18.681867    3354 status.go:330] multinode-369000 host status = "Stopped" (err=<nil>)
	I0729 04:10:18.681872    3354 status.go:343] host is not running, skipping remaining checks
	I0729 04:10:18.681875    3354 status.go:257] multinode-369000 status: &{Name:multinode-369000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-369000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-369000 status -v=7 --alsologtostderr: exit status 7 (73.534ms)

                                                
                                                
-- stdout --
	multinode-369000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:10:21.953305    3356 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:10:21.953499    3356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:10:21.953504    3356 out.go:304] Setting ErrFile to fd 2...
	I0729 04:10:21.953507    3356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:10:21.953691    3356 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:10:21.953876    3356 out.go:298] Setting JSON to false
	I0729 04:10:21.953889    3356 mustload.go:65] Loading cluster: multinode-369000
	I0729 04:10:21.953933    3356 notify.go:220] Checking for updates...
	I0729 04:10:21.954169    3356 config.go:182] Loaded profile config "multinode-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:10:21.954181    3356 status.go:255] checking status of multinode-369000 ...
	I0729 04:10:21.954517    3356 status.go:330] multinode-369000 host status = "Stopped" (err=<nil>)
	I0729 04:10:21.954522    3356 status.go:343] host is not running, skipping remaining checks
	I0729 04:10:21.954526    3356 status.go:257] multinode-369000 status: &{Name:multinode-369000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-369000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-369000 status -v=7 --alsologtostderr: exit status 7 (71.778916ms)

                                                
                                                
-- stdout --
	multinode-369000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:10:25.574200    3358 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:10:25.574433    3358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:10:25.574447    3358 out.go:304] Setting ErrFile to fd 2...
	I0729 04:10:25.574461    3358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:10:25.574640    3358 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:10:25.574805    3358 out.go:298] Setting JSON to false
	I0729 04:10:25.574818    3358 mustload.go:65] Loading cluster: multinode-369000
	I0729 04:10:25.574864    3358 notify.go:220] Checking for updates...
	I0729 04:10:25.575103    3358 config.go:182] Loaded profile config "multinode-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:10:25.575112    3358 status.go:255] checking status of multinode-369000 ...
	I0729 04:10:25.575402    3358 status.go:330] multinode-369000 host status = "Stopped" (err=<nil>)
	I0729 04:10:25.575406    3358 status.go:343] host is not running, skipping remaining checks
	I0729 04:10:25.575409    3358 status.go:257] multinode-369000 status: &{Name:multinode-369000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-369000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-369000 status -v=7 --alsologtostderr: exit status 7 (72.150666ms)

                                                
                                                
-- stdout --
	multinode-369000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:10:28.818055    3360 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:10:28.818247    3360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:10:28.818252    3360 out.go:304] Setting ErrFile to fd 2...
	I0729 04:10:28.818255    3360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:10:28.818438    3360 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:10:28.818594    3360 out.go:298] Setting JSON to false
	I0729 04:10:28.818606    3360 mustload.go:65] Loading cluster: multinode-369000
	I0729 04:10:28.818652    3360 notify.go:220] Checking for updates...
	I0729 04:10:28.818860    3360 config.go:182] Loaded profile config "multinode-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:10:28.818868    3360 status.go:255] checking status of multinode-369000 ...
	I0729 04:10:28.819156    3360 status.go:330] multinode-369000 host status = "Stopped" (err=<nil>)
	I0729 04:10:28.819161    3360 status.go:343] host is not running, skipping remaining checks
	I0729 04:10:28.819164    3360 status.go:257] multinode-369000 status: &{Name:multinode-369000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-369000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-369000 status -v=7 --alsologtostderr: exit status 7 (70.563708ms)

                                                
                                                
-- stdout --
	multinode-369000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:10:36.427452    3366 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:10:36.427669    3366 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:10:36.427674    3366 out.go:304] Setting ErrFile to fd 2...
	I0729 04:10:36.427677    3366 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:10:36.427876    3366 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:10:36.428054    3366 out.go:298] Setting JSON to false
	I0729 04:10:36.428066    3366 mustload.go:65] Loading cluster: multinode-369000
	I0729 04:10:36.428115    3366 notify.go:220] Checking for updates...
	I0729 04:10:36.428356    3366 config.go:182] Loaded profile config "multinode-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:10:36.428364    3366 status.go:255] checking status of multinode-369000 ...
	I0729 04:10:36.428634    3366 status.go:330] multinode-369000 host status = "Stopped" (err=<nil>)
	I0729 04:10:36.428639    3366 status.go:343] host is not running, skipping remaining checks
	I0729 04:10:36.428645    3366 status.go:257] multinode-369000 status: &{Name:multinode-369000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-369000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-369000 status -v=7 --alsologtostderr: exit status 7 (71.649708ms)

                                                
                                                
-- stdout --
	multinode-369000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:10:52.999920    3373 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:10:53.000121    3373 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:10:53.000126    3373 out.go:304] Setting ErrFile to fd 2...
	I0729 04:10:53.000129    3373 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:10:53.000299    3373 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:10:53.000477    3373 out.go:298] Setting JSON to false
	I0729 04:10:53.000489    3373 mustload.go:65] Loading cluster: multinode-369000
	I0729 04:10:53.000532    3373 notify.go:220] Checking for updates...
	I0729 04:10:53.000740    3373 config.go:182] Loaded profile config "multinode-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:10:53.000749    3373 status.go:255] checking status of multinode-369000 ...
	I0729 04:10:53.001039    3373 status.go:330] multinode-369000 host status = "Stopped" (err=<nil>)
	I0729 04:10:53.001044    3373 status.go:343] host is not running, skipping remaining checks
	I0729 04:10:53.001047    3373 status.go:257] multinode-369000 status: &{Name:multinode-369000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-369000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-369000 status -v=7 --alsologtostderr: exit status 7 (71.453584ms)

                                                
                                                
-- stdout --
	multinode-369000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:11:04.684050    3377 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:11:04.684245    3377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:11:04.684249    3377 out.go:304] Setting ErrFile to fd 2...
	I0729 04:11:04.684252    3377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:11:04.684406    3377 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:11:04.684569    3377 out.go:298] Setting JSON to false
	I0729 04:11:04.684581    3377 mustload.go:65] Loading cluster: multinode-369000
	I0729 04:11:04.684624    3377 notify.go:220] Checking for updates...
	I0729 04:11:04.684859    3377 config.go:182] Loaded profile config "multinode-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:11:04.684867    3377 status.go:255] checking status of multinode-369000 ...
	I0729 04:11:04.685136    3377 status.go:330] multinode-369000 host status = "Stopped" (err=<nil>)
	I0729 04:11:04.685141    3377 status.go:343] host is not running, skipping remaining checks
	I0729 04:11:04.685144    3377 status.go:257] multinode-369000 status: &{Name:multinode-369000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-369000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000: exit status 7 (32.961792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (48.18s)
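
Every status probe in this test exits with status 7 even though the printed output is well-formed: `minikube status` reports its result through an exit-code bitmask rather than a plain pass/fail. A minimal decoding sketch, assuming the flag layout in minikube's cmd/status.go (bit 0 = host not running, bit 1 = kubelet/apiserver not running, bit 2 = kubeconfig not configured; verify against the release under test):

    # Decode the `minikube status` exit code (bit values are an assumption
    # taken from minikube's cmd/status.go, not from this log).
    out/minikube-darwin-arm64 -p multinode-369000 status; code=$?
    (( code & 1 )) && echo "host not running"
    (( code & 2 )) && echo "kubelet/apiserver not running"
    (( code & 4 )) && echo "kubeconfig not configured"   # 7 = all three

Under that assumption, exit status 7 corresponds exactly to the all-Stopped state printed in the stdout blocks above.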

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-369000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-369000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-369000: (3.56187625s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-369000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-369000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.216942875s)

                                                
                                                
-- stdout --
	* [multinode-369000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-369000" primary control-plane node in "multinode-369000" cluster
	* Restarting existing qemu2 VM for "multinode-369000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-369000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:11:08.371520    3401 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:11:08.371686    3401 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:11:08.371690    3401 out.go:304] Setting ErrFile to fd 2...
	I0729 04:11:08.371693    3401 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:11:08.371857    3401 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:11:08.373068    3401 out.go:298] Setting JSON to false
	I0729 04:11:08.392484    3401 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2431,"bootTime":1722249037,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:11:08.392556    3401 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:11:08.397137    3401 out.go:177] * [multinode-369000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:11:08.403977    3401 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:11:08.404035    3401 notify.go:220] Checking for updates...
	I0729 04:11:08.411051    3401 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:11:08.414000    3401 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:11:08.416993    3401 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:11:08.420050    3401 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:11:08.423042    3401 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:11:08.426388    3401 config.go:182] Loaded profile config "multinode-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:11:08.426450    3401 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:11:08.431023    3401 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:11:08.438025    3401 start.go:297] selected driver: qemu2
	I0729 04:11:08.438033    3401 start.go:901] validating driver "qemu2" against &{Name:multinode-369000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:11:08.438096    3401 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:11:08.440542    3401 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:11:08.440566    3401 cni.go:84] Creating CNI manager for ""
	I0729 04:11:08.440572    3401 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 04:11:08.440630    3401 start.go:340] cluster config:
	{Name:multinode-369000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:11:08.444443    3401 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:11:08.449981    3401 out.go:177] * Starting "multinode-369000" primary control-plane node in "multinode-369000" cluster
	I0729 04:11:08.453994    3401 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:11:08.454007    3401 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:11:08.454017    3401 cache.go:56] Caching tarball of preloaded images
	I0729 04:11:08.454074    3401 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:11:08.454080    3401 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:11:08.454133    3401 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/multinode-369000/config.json ...
	I0729 04:11:08.454586    3401 start.go:360] acquireMachinesLock for multinode-369000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:11:08.454626    3401 start.go:364] duration metric: took 33.167µs to acquireMachinesLock for "multinode-369000"
	I0729 04:11:08.454637    3401 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:11:08.454643    3401 fix.go:54] fixHost starting: 
	I0729 04:11:08.454772    3401 fix.go:112] recreateIfNeeded on multinode-369000: state=Stopped err=<nil>
	W0729 04:11:08.454781    3401 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:11:08.458996    3401 out.go:177] * Restarting existing qemu2 VM for "multinode-369000" ...
	I0729 04:11:08.466894    3401 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:11:08.466932    3401 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:7f:f3:47:1a:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/disk.qcow2
	I0729 04:11:08.469029    3401 main.go:141] libmachine: STDOUT: 
	I0729 04:11:08.469048    3401 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:11:08.469080    3401 fix.go:56] duration metric: took 14.436875ms for fixHost
	I0729 04:11:08.469085    3401 start.go:83] releasing machines lock for "multinode-369000", held for 14.455208ms
	W0729 04:11:08.469091    3401 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:11:08.469125    3401 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:11:08.469131    3401 start.go:729] Will try again in 5 seconds ...
	I0729 04:11:13.471202    3401 start.go:360] acquireMachinesLock for multinode-369000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:11:13.471633    3401 start.go:364] duration metric: took 309.167µs to acquireMachinesLock for "multinode-369000"
	I0729 04:11:13.471765    3401 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:11:13.471783    3401 fix.go:54] fixHost starting: 
	I0729 04:11:13.472457    3401 fix.go:112] recreateIfNeeded on multinode-369000: state=Stopped err=<nil>
	W0729 04:11:13.472483    3401 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:11:13.476863    3401 out.go:177] * Restarting existing qemu2 VM for "multinode-369000" ...
	I0729 04:11:13.481855    3401 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:11:13.482155    3401 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:7f:f3:47:1a:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/disk.qcow2
	I0729 04:11:13.490837    3401 main.go:141] libmachine: STDOUT: 
	I0729 04:11:13.490932    3401 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:11:13.491020    3401 fix.go:56] duration metric: took 19.233292ms for fixHost
	I0729 04:11:13.491043    3401 start.go:83] releasing machines lock for "multinode-369000", held for 19.387459ms
	W0729 04:11:13.491236    3401 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-369000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-369000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:11:13.498976    3401 out.go:177] 
	W0729 04:11:13.502953    3401 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:11:13.502984    3401 out.go:239] * 
	* 
	W0729 04:11:13.505762    3401 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:11:13.514802    3401 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 start -p multinode-369000 --wait=true -v=8 --alsologtostderr" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-369000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000: exit status 7 (31.232417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.91s)
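
Both restart attempts die at the same point: the qemu2 driver launches the VM through socket_vmnet_client, which cannot reach the vmnet helper daemon behind /var/run/socket_vmnet. Before rerunning, it is worth confirming the daemon is actually up; a diagnostic sketch (paths match the /opt/socket_vmnet install shown in the libmachine command line above; the relaunch invocation and gateway address are assumptions taken from the socket_vmnet README):

    # Is the unix socket present, and is the daemon serving it?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet || echo "socket_vmnet daemon is not running"
    # Relaunch by hand if needed (assumed invocation; adjust gateway/paths):
    # sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

With the daemon down, every qemu2 start in the remainder of this run fails the same way before the guest ever boots.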

                                                
                                    
TestMultiNode/serial/DeleteNode (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-369000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-369000 node delete m03: exit status 83 (36.151291ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-369000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-369000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-369000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-369000 status --alsologtostderr: exit status 7 (28.5925ms)

                                                
                                                
-- stdout --
	multinode-369000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:11:13.690551    3415 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:11:13.690699    3415 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:11:13.690702    3415 out.go:304] Setting ErrFile to fd 2...
	I0729 04:11:13.690705    3415 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:11:13.690833    3415 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:11:13.690957    3415 out.go:298] Setting JSON to false
	I0729 04:11:13.690966    3415 mustload.go:65] Loading cluster: multinode-369000
	I0729 04:11:13.691024    3415 notify.go:220] Checking for updates...
	I0729 04:11:13.691181    3415 config.go:182] Loaded profile config "multinode-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:11:13.691187    3415 status.go:255] checking status of multinode-369000 ...
	I0729 04:11:13.691401    3415 status.go:330] multinode-369000 host status = "Stopped" (err=<nil>)
	I0729 04:11:13.691404    3415 status.go:343] host is not running, skipping remaining checks
	I0729 04:11:13.691406    3415 status.go:257] multinode-369000 status: &{Name:multinode-369000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-369000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000: exit status 7 (28.435958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.09s)
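
`node delete` returns exit status 83 here because, as the stdout says, the control-plane host is stopped; the command never reaches the point of removing m03. A guard sketch that mirrors the post-mortem check below (same --format template):

    # Only attempt node operations while the control-plane host is Running.
    if [ "$(out/minikube-darwin-arm64 status --format='{{.Host}}' -p multinode-369000)" = "Running" ]; then
      out/minikube-darwin-arm64 -p multinode-369000 node delete m03
    else
      echo 'cluster is stopped; run: minikube start -p multinode-369000'
    fi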

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-369000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-369000 stop: (3.227442166s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-369000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-369000 status: exit status 7 (62.573625ms)

                                                
                                                
-- stdout --
	multinode-369000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-369000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-369000 status --alsologtostderr: exit status 7 (32.425833ms)

                                                
                                                
-- stdout --
	multinode-369000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:11:17.041904    3439 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:11:17.042062    3439 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:11:17.042066    3439 out.go:304] Setting ErrFile to fd 2...
	I0729 04:11:17.042068    3439 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:11:17.042196    3439 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:11:17.042314    3439 out.go:298] Setting JSON to false
	I0729 04:11:17.042324    3439 mustload.go:65] Loading cluster: multinode-369000
	I0729 04:11:17.042376    3439 notify.go:220] Checking for updates...
	I0729 04:11:17.042529    3439 config.go:182] Loaded profile config "multinode-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:11:17.042535    3439 status.go:255] checking status of multinode-369000 ...
	I0729 04:11:17.042769    3439 status.go:330] multinode-369000 host status = "Stopped" (err=<nil>)
	I0729 04:11:17.042772    3439 status.go:343] host is not running, skipping remaining checks
	I0729 04:11:17.042775    3439 status.go:257] multinode-369000 status: &{Name:multinode-369000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-369000 status --alsologtostderr": multinode-369000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-369000 status --alsologtostderr": multinode-369000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000: exit status 7 (29.246208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.35s)
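
The assertions at multinode_test.go:364 and :368 count "Stopped" markers in the status output and expect one per node of the two-node cluster; because the second node was never created, the count comes up as 1 instead of 2. The check reduces to roughly this sketch (the expected count of 2 is an assumption from the FreshStart2Nodes setup):

    # Count stopped hosts/kubelets the way the test assertion effectively does.
    out=$(out/minikube-darwin-arm64 -p multinode-369000 status --alsologtostderr)
    hosts=$(printf '%s\n' "$out" | grep -c 'host: Stopped')
    kubelets=$(printf '%s\n' "$out" | grep -c 'kubelet: Stopped')
    [ "$hosts" -eq 2 ] && [ "$kubelets" -eq 2 ] || echo "expected 2 stopped nodes, got $hosts/$kubelets"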

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-369000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-369000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.18108575s)

                                                
                                                
-- stdout --
	* [multinode-369000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-369000" primary control-plane node in "multinode-369000" cluster
	* Restarting existing qemu2 VM for "multinode-369000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-369000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:11:17.099541    3443 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:11:17.099655    3443 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:11:17.099658    3443 out.go:304] Setting ErrFile to fd 2...
	I0729 04:11:17.099660    3443 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:11:17.099795    3443 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:11:17.100880    3443 out.go:298] Setting JSON to false
	I0729 04:11:17.117105    3443 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2440,"bootTime":1722249037,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:11:17.117255    3443 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:11:17.122053    3443 out.go:177] * [multinode-369000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:11:17.129031    3443 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:11:17.129076    3443 notify.go:220] Checking for updates...
	I0729 04:11:17.135923    3443 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:11:17.139004    3443 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:11:17.142001    3443 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:11:17.144970    3443 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:11:17.148011    3443 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:11:17.151287    3443 config.go:182] Loaded profile config "multinode-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:11:17.151556    3443 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:11:17.155949    3443 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:11:17.162910    3443 start.go:297] selected driver: qemu2
	I0729 04:11:17.162919    3443 start.go:901] validating driver "qemu2" against &{Name:multinode-369000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:11:17.162988    3443 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:11:17.165177    3443 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:11:17.165198    3443 cni.go:84] Creating CNI manager for ""
	I0729 04:11:17.165203    3443 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 04:11:17.165252    3443 start.go:340] cluster config:
	{Name:multinode-369000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:11:17.168499    3443 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:11:17.175812    3443 out.go:177] * Starting "multinode-369000" primary control-plane node in "multinode-369000" cluster
	I0729 04:11:17.180005    3443 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:11:17.180019    3443 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:11:17.180030    3443 cache.go:56] Caching tarball of preloaded images
	I0729 04:11:17.180079    3443 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:11:17.180085    3443 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:11:17.180144    3443 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/multinode-369000/config.json ...
	I0729 04:11:17.180568    3443 start.go:360] acquireMachinesLock for multinode-369000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:11:17.180597    3443 start.go:364] duration metric: took 23.583µs to acquireMachinesLock for "multinode-369000"
	I0729 04:11:17.180608    3443 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:11:17.180615    3443 fix.go:54] fixHost starting: 
	I0729 04:11:17.180739    3443 fix.go:112] recreateIfNeeded on multinode-369000: state=Stopped err=<nil>
	W0729 04:11:17.180748    3443 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:11:17.188926    3443 out.go:177] * Restarting existing qemu2 VM for "multinode-369000" ...
	I0729 04:11:17.192960    3443 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:11:17.193001    3443 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:7f:f3:47:1a:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/disk.qcow2
	I0729 04:11:17.194985    3443 main.go:141] libmachine: STDOUT: 
	I0729 04:11:17.195004    3443 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:11:17.195034    3443 fix.go:56] duration metric: took 14.419417ms for fixHost
	I0729 04:11:17.195040    3443 start.go:83] releasing machines lock for "multinode-369000", held for 14.438791ms
	W0729 04:11:17.195046    3443 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:11:17.195079    3443 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:11:17.195084    3443 start.go:729] Will try again in 5 seconds ...
	I0729 04:11:22.195561    3443 start.go:360] acquireMachinesLock for multinode-369000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:11:22.196141    3443 start.go:364] duration metric: took 403.125µs to acquireMachinesLock for "multinode-369000"
	I0729 04:11:22.196281    3443 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:11:22.196301    3443 fix.go:54] fixHost starting: 
	I0729 04:11:22.197005    3443 fix.go:112] recreateIfNeeded on multinode-369000: state=Stopped err=<nil>
	W0729 04:11:22.197033    3443 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:11:22.201498    3443 out.go:177] * Restarting existing qemu2 VM for "multinode-369000" ...
	I0729 04:11:22.209533    3443 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:11:22.209759    3443 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:7f:f3:47:1a:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/multinode-369000/disk.qcow2
	I0729 04:11:22.219336    3443 main.go:141] libmachine: STDOUT: 
	I0729 04:11:22.219398    3443 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:11:22.219491    3443 fix.go:56] duration metric: took 23.192791ms for fixHost
	I0729 04:11:22.219510    3443 start.go:83] releasing machines lock for "multinode-369000", held for 23.34725ms
	W0729 04:11:22.219696    3443 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-369000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-369000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:11:22.227502    3443 out.go:177] 
	W0729 04:11:22.231460    3443 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:11:22.231482    3443 out.go:239] * 
	* 
	W0729 04:11:22.233804    3443 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:11:22.241593    3443 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-369000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000: exit status 7 (66.6155ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
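
The start attempts exit with status 80, which this run pairs with the GUEST_PROVISION error class in stderr, and the CLI's own hint is to delete the profile and retry. Scripted, that recovery is roughly the following (the exit-code value is read off this log rather than a documented contract, and the retry cannot help while socket_vmnet itself is refusing connections):

    # Delete-and-retry sketch for the GUEST_PROVISION failure seen above.
    out/minikube-darwin-arm64 start -p multinode-369000 --wait=true || {
      rc=$?
      if [ "$rc" -eq 80 ]; then
        out/minikube-darwin-arm64 delete -p multinode-369000
        out/minikube-darwin-arm64 start -p multinode-369000 --wait=true
      fi
    }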

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (19.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-369000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-369000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-369000-m01 --driver=qemu2 : exit status 80 (9.870215458s)

                                                
                                                
-- stdout --
	* [multinode-369000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-369000-m01" primary control-plane node in "multinode-369000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-369000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-369000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-369000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-369000-m02 --driver=qemu2 : exit status 80 (9.886187375s)

                                                
                                                
-- stdout --
	* [multinode-369000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-369000-m02" primary control-plane node in "multinode-369000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-369000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-369000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-369000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-369000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-369000: exit status 83 (79.275708ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-369000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-369000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-369000 -n multinode-369000: exit status 7 (30.114416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (19.98s)
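
This test exercises profile/node name collisions: minikube names additional nodes <profile>-m02, <profile>-m03, and so on, so standalone profiles called multinode-369000-m01 and multinode-369000-m02 can shadow node names of the multinode-369000 cluster, and node add is expected to refuse the conflict. Listing the two namespaces side by side makes the overlap visible (both subcommands appear in the test steps above):

    # Cluster node names vs. profile names; the -mNN suffix scheme is the
    # source of the collision this test validates.
    out/minikube-darwin-arm64 node list -p multinode-369000
    out/minikube-darwin-arm64 profile list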

                                                
                                    
TestPreload (10.13s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-549000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-549000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.979347917s)

                                                
                                                
-- stdout --
	* [test-preload-549000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-549000" primary control-plane node in "test-preload-549000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-549000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:11:42.435909    3498 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:11:42.436043    3498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:11:42.436047    3498 out.go:304] Setting ErrFile to fd 2...
	I0729 04:11:42.436049    3498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:11:42.436172    3498 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:11:42.437195    3498 out.go:298] Setting JSON to false
	I0729 04:11:42.452963    3498 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2465,"bootTime":1722249037,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:11:42.453055    3498 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:11:42.459601    3498 out.go:177] * [test-preload-549000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:11:42.467525    3498 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:11:42.467558    3498 notify.go:220] Checking for updates...
	I0729 04:11:42.475479    3498 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:11:42.478512    3498 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:11:42.481583    3498 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:11:42.482971    3498 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:11:42.486481    3498 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:11:42.489817    3498 config.go:182] Loaded profile config "multinode-369000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:11:42.489870    3498 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:11:42.494361    3498 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:11:42.501558    3498 start.go:297] selected driver: qemu2
	I0729 04:11:42.501564    3498 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:11:42.501571    3498 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:11:42.503611    3498 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:11:42.506578    3498 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:11:42.509654    3498 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:11:42.509674    3498 cni.go:84] Creating CNI manager for ""
	I0729 04:11:42.509681    3498 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:11:42.509686    3498 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:11:42.509729    3498 start.go:340] cluster config:
	{Name:test-preload-549000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:11:42.513224    3498 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:11:42.519495    3498 out.go:177] * Starting "test-preload-549000" primary control-plane node in "test-preload-549000" cluster
	I0729 04:11:42.523491    3498 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0729 04:11:42.523609    3498 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/test-preload-549000/config.json ...
	I0729 04:11:42.523616    3498 cache.go:107] acquiring lock: {Name:mk2df94b52ac637de48a5553a8a3fa7c9ef4ed93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:11:42.523630    3498 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/test-preload-549000/config.json: {Name:mk5051e1bdad3e75c6810f69d1988fb1697ee67a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:11:42.523620    3498 cache.go:107] acquiring lock: {Name:mk3a7b6239213a0a6b022c71439755de57ea72c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:11:42.523625    3498 cache.go:107] acquiring lock: {Name:mk81eb27269b0e193a160e106cce48f25961cf29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:11:42.523644    3498 cache.go:107] acquiring lock: {Name:mk6eb27e77b897e98a759130bf656dd4e484fe9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:11:42.523671    3498 cache.go:107] acquiring lock: {Name:mk16e34783c8ac41529b54f300643303e0792470 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:11:42.523812    3498 cache.go:107] acquiring lock: {Name:mk99de47d5a70e843496e5525486b9defa5edcc1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:11:42.523882    3498 cache.go:107] acquiring lock: {Name:mk26c6efcbde1249e60a450f5cf0cbec68701698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:11:42.523889    3498 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 04:11:42.523889    3498 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 04:11:42.523923    3498 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 04:11:42.523977    3498 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 04:11:42.523997    3498 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:11:42.524070    3498 cache.go:107] acquiring lock: {Name:mk659f7f831b49b65350d757edca3ac2bbde14e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:11:42.524171    3498 start.go:360] acquireMachinesLock for test-preload-549000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:11:42.524189    3498 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:11:42.524235    3498 start.go:364] duration metric: took 55.25µs to acquireMachinesLock for "test-preload-549000"
	I0729 04:11:42.524273    3498 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 04:11:42.524248    3498 start.go:93] Provisioning new machine with config: &{Name:test-preload-549000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:11:42.524291    3498 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:11:42.524286    3498 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:11:42.531465    3498 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:11:42.535808    3498 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 04:11:42.535862    3498 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:11:42.535941    3498 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 04:11:42.536549    3498 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:11:42.536611    3498 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 04:11:42.536638    3498 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:11:42.538337    3498 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 04:11:42.538386    3498 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 04:11:42.548699    3498 start.go:159] libmachine.API.Create for "test-preload-549000" (driver="qemu2")
	I0729 04:11:42.548719    3498 client.go:168] LocalClient.Create starting
	I0729 04:11:42.548815    3498 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:11:42.548843    3498 main.go:141] libmachine: Decoding PEM data...
	I0729 04:11:42.548852    3498 main.go:141] libmachine: Parsing certificate...
	I0729 04:11:42.548891    3498 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:11:42.548913    3498 main.go:141] libmachine: Decoding PEM data...
	I0729 04:11:42.548920    3498 main.go:141] libmachine: Parsing certificate...
	I0729 04:11:42.549260    3498 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:11:42.701353    3498 main.go:141] libmachine: Creating SSH key...
	I0729 04:11:42.817396    3498 main.go:141] libmachine: Creating Disk image...
	I0729 04:11:42.817414    3498 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:11:42.817594    3498 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/test-preload-549000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/test-preload-549000/disk.qcow2
	I0729 04:11:42.827186    3498 main.go:141] libmachine: STDOUT: 
	I0729 04:11:42.827207    3498 main.go:141] libmachine: STDERR: 
	I0729 04:11:42.827255    3498 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/test-preload-549000/disk.qcow2 +20000M
	I0729 04:11:42.836135    3498 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:11:42.836157    3498 main.go:141] libmachine: STDERR: 
	I0729 04:11:42.836172    3498 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/test-preload-549000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/test-preload-549000/disk.qcow2
	I0729 04:11:42.836178    3498 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:11:42.836192    3498 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:11:42.836228    3498 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/test-preload-549000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/test-preload-549000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/test-preload-549000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:b5:5f:da:fe:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/test-preload-549000/disk.qcow2
	I0729 04:11:42.838364    3498 main.go:141] libmachine: STDOUT: 
	I0729 04:11:42.838378    3498 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:11:42.838392    3498 client.go:171] duration metric: took 289.677084ms to LocalClient.Create
	I0729 04:11:42.988087    3498 cache.go:162] opening:  /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0729 04:11:43.009734    3498 cache.go:162] opening:  /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0729 04:11:43.034736    3498 cache.go:162] opening:  /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 04:11:43.035839    3498 cache.go:162] opening:  /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W0729 04:11:43.075080    3498 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 04:11:43.075115    3498 cache.go:162] opening:  /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 04:11:43.079795    3498 cache.go:162] opening:  /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0729 04:11:43.123356    3498 cache.go:162] opening:  /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 04:11:43.255303    3498 cache.go:157] /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0729 04:11:43.255371    3498 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 731.621542ms
	I0729 04:11:43.255410    3498 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0729 04:11:43.387955    3498 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 04:11:43.388041    3498 cache.go:162] opening:  /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 04:11:43.661954    3498 cache.go:157] /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 04:11:43.662007    3498 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.138425292s
	I0729 04:11:43.662043    3498 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 04:11:44.838622    3498 start.go:128] duration metric: took 2.314377s to createHost
	I0729 04:11:44.838664    3498 start.go:83] releasing machines lock for "test-preload-549000", held for 2.314494375s
	W0729 04:11:44.838713    3498 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:11:44.850699    3498 out.go:177] * Deleting "test-preload-549000" in qemu2 ...
	W0729 04:11:44.879560    3498 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:11:44.879594    3498 start.go:729] Will try again in 5 seconds ...
	I0729 04:11:45.060133    3498 cache.go:157] /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0729 04:11:45.060183    3498 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.536188875s
	I0729 04:11:45.060209    3498 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0729 04:11:45.770438    3498 cache.go:157] /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0729 04:11:45.770490    3498 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.246945708s
	I0729 04:11:45.770512    3498 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0729 04:11:46.269995    3498 cache.go:157] /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0729 04:11:46.270038    3498 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 3.746542459s
	I0729 04:11:46.270063    3498 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0729 04:11:46.919694    3498 cache.go:157] /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0729 04:11:46.919736    3498 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.396259208s
	I0729 04:11:46.919758    3498 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0729 04:11:47.416960    3498 cache.go:157] /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0729 04:11:47.417006    3498 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 4.893488125s
	I0729 04:11:47.417030    3498 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0729 04:11:49.879650    3498 start.go:360] acquireMachinesLock for test-preload-549000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:11:49.880058    3498 start.go:364] duration metric: took 328.458µs to acquireMachinesLock for "test-preload-549000"
	I0729 04:11:49.880187    3498 start.go:93] Provisioning new machine with config: &{Name:test-preload-549000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:11:49.880559    3498 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:11:49.890211    3498 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:11:49.940579    3498 start.go:159] libmachine.API.Create for "test-preload-549000" (driver="qemu2")
	I0729 04:11:49.940686    3498 client.go:168] LocalClient.Create starting
	I0729 04:11:49.940802    3498 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:11:49.940881    3498 main.go:141] libmachine: Decoding PEM data...
	I0729 04:11:49.940899    3498 main.go:141] libmachine: Parsing certificate...
	I0729 04:11:49.940958    3498 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:11:49.941013    3498 main.go:141] libmachine: Decoding PEM data...
	I0729 04:11:49.941026    3498 main.go:141] libmachine: Parsing certificate...
	I0729 04:11:49.941550    3498 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:11:50.104865    3498 main.go:141] libmachine: Creating SSH key...
	I0729 04:11:50.315447    3498 main.go:141] libmachine: Creating Disk image...
	I0729 04:11:50.315455    3498 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:11:50.315683    3498 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/test-preload-549000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/test-preload-549000/disk.qcow2
	I0729 04:11:50.325518    3498 main.go:141] libmachine: STDOUT: 
	I0729 04:11:50.325556    3498 main.go:141] libmachine: STDERR: 
	I0729 04:11:50.325611    3498 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/test-preload-549000/disk.qcow2 +20000M
	I0729 04:11:50.333744    3498 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:11:50.333757    3498 main.go:141] libmachine: STDERR: 
	I0729 04:11:50.333770    3498 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/test-preload-549000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/test-preload-549000/disk.qcow2
	I0729 04:11:50.333773    3498 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:11:50.333785    3498 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:11:50.333820    3498 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/test-preload-549000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/test-preload-549000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/test-preload-549000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:0e:31:0f:46:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/test-preload-549000/disk.qcow2
	I0729 04:11:50.335468    3498 main.go:141] libmachine: STDOUT: 
	I0729 04:11:50.335530    3498 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:11:50.335542    3498 client.go:171] duration metric: took 394.863167ms to LocalClient.Create
	I0729 04:11:52.336779    3498 start.go:128] duration metric: took 2.456256375s to createHost
	I0729 04:11:52.336847    3498 start.go:83] releasing machines lock for "test-preload-549000", held for 2.456841333s
	W0729 04:11:52.337146    3498 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-549000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-549000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:11:52.352653    3498 out.go:177] 
	W0729 04:11:52.356588    3498 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:11:52.356615    3498 out.go:239] * 
	* 
	W0729 04:11:52.359467    3498 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:11:52.372584    3498 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-549000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-29 04:11:52.390588 -0700 PDT m=+2251.629826584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-549000 -n test-preload-549000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-549000 -n test-preload-549000: exit status 7 (66.726667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-549000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-549000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-549000
--- FAIL: TestPreload (10.13s)
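
Note: both createHost attempts fail at the same step: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which needs the socket_vmnet daemon to be serving /var/run/socket_vmnet. A minimal triage sketch, with paths taken from the log above; the Homebrew service command is an assumption about how the daemon is managed on these agents:

	# is anything serving the socket that socket_vmnet_client dials?
	ls -l /var/run/socket_vmnet
	# restart the daemon (assumes socket_vmnet was installed via Homebrew)
	sudo brew services restart socket_vmnet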

TestScheduledStopUnix (10.04s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-202000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-202000 --memory=2048 --driver=qemu2 : exit status 80 (9.887497125s)

-- stdout --
	* [scheduled-stop-202000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-202000" primary control-plane node in "scheduled-stop-202000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-202000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-202000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-202000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-202000" primary control-plane node in "scheduled-stop-202000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-202000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-202000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-29 04:12:02.424108 -0700 PDT m=+2261.663671376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-202000 -n scheduled-stop-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-202000 -n scheduled-stop-202000: exit status 7 (67.349834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-202000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-202000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-202000
--- FAIL: TestScheduledStopUnix (10.04s)
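
Note: the scheduled-stop behavior under test is never reached; the run fails at VM creation. For reference, once a profile does start, the test exercises minikube's scheduled-stop flags along these lines (duration illustrative, profile name from the log):

	out/minikube-darwin-arm64 stop -p scheduled-stop-202000 --schedule 5m
	# ...and later disarms the pending stop:
	out/minikube-darwin-arm64 stop -p scheduled-stop-202000 --cancel-scheduled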

TestSkaffold (12.13s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2045086448 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2045086448 version: (1.06715325s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-454000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-454000 --memory=2600 --driver=qemu2 : exit status 80 (9.769543208s)

-- stdout --
	* [skaffold-454000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-454000" primary control-plane node in "skaffold-454000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-454000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-454000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-454000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-454000" primary control-plane node in "skaffold-454000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-454000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-454000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-07-29 04:12:14.558438 -0700 PDT m=+2273.798394543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-454000 -n skaffold-454000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-454000 -n skaffold-454000: exit status 7 (62.825041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-454000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-454000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-454000
--- FAIL: TestSkaffold (12.13s)
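
Note: the skaffold binary itself is healthy (v2.13.1 answered "version" in about a second); only the cluster bring-up fails. Had the cluster started, the next step would be a skaffold deploy against it, roughly like this hypothetical invocation (the --kube-context value mirrors the profile name and is not taken from the log):

	/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2045086448 run --kube-context skaffold-454000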

TestRunningBinaryUpgrade (585.08s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3947969142 start -p running-upgrade-033000 --memory=2200 --vm-driver=qemu2 
E0729 04:13:20.044435    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/addons-867000/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3947969142 start -p running-upgrade-033000 --memory=2200 --vm-driver=qemu2 : (49.759747584s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-033000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0729 04:15:14.739176    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-033000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m20.840125875s)

-- stdout --
	* [running-upgrade-033000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-033000" primary control-plane node in "running-upgrade-033000" cluster
	* Updating the running qemu2 "running-upgrade-033000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0729 04:13:46.723636    3891 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:13:46.723775    3891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:13:46.723778    3891 out.go:304] Setting ErrFile to fd 2...
	I0729 04:13:46.723781    3891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:13:46.723911    3891 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:13:46.724959    3891 out.go:298] Setting JSON to false
	I0729 04:13:46.741901    3891 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2589,"bootTime":1722249037,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:13:46.741969    3891 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:13:46.746354    3891 out.go:177] * [running-upgrade-033000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:13:46.753398    3891 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:13:46.753462    3891 notify.go:220] Checking for updates...
	I0729 04:13:46.759371    3891 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:13:46.766273    3891 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:13:46.769344    3891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:13:46.772348    3891 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:13:46.775316    3891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:13:46.778557    3891 config.go:182] Loaded profile config "running-upgrade-033000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:13:46.782365    3891 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 04:13:46.785343    3891 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:13:46.789336    3891 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:13:46.796348    3891 start.go:297] selected driver: qemu2
	I0729 04:13:46.796357    3891 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50299 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 04:13:46.796406    3891 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:13:46.798715    3891 cni.go:84] Creating CNI manager for ""
	I0729 04:13:46.798736    3891 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:13:46.798765    3891 start.go:340] cluster config:
	{Name:running-upgrade-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50299 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
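
The bridge recommendation logged by cni.go above is version-driven: Kubernetes v1.24 removed dockershim, so with a VM driver and the docker runtime minikube selects an explicit CNI instead of leaving networking to the runtime. A minimal Go sketch of that decision; chooseCNI and its parameters are illustrative names, not minikube's actual API:

    package main

    import "fmt"

    // chooseCNI mirrors the rule the log states: VM driver + docker runtime
    // on Kubernetes v1.24 or newer gets the bridge CNI by default.
    func chooseCNI(vmDriver bool, runtime string, k8sMinor int) string {
        if vmDriver && runtime == "docker" && k8sMinor >= 24 {
            return "bridge"
        }
        return "" // otherwise: leave the choice to the runtime/user
    }

    func main() {
        fmt.Println(chooseCNI(true, "docker", 24)) // "bridge", as logged
    }
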
	I0729 04:13:46.798815    3891 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:13:46.806373    3891 out.go:177] * Starting "running-upgrade-033000" primary control-plane node in "running-upgrade-033000" cluster
	I0729 04:13:46.810349    3891 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 04:13:46.810372    3891 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0729 04:13:46.810378    3891 cache.go:56] Caching tarball of preloaded images
	I0729 04:13:46.810442    3891 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:13:46.810448    3891 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0729 04:13:46.810502    3891 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/running-upgrade-033000/config.json ...
	I0729 04:13:46.810846    3891 start.go:360] acquireMachinesLock for running-upgrade-033000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:13:46.810884    3891 start.go:364] duration metric: took 31.792µs to acquireMachinesLock for "running-upgrade-033000"
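
The lock line above carries its own retry policy (Delay:500ms Timeout:13m0s). A minimal sketch of that acquire loop, assuming a tryLock callback standing in for whatever backing lock minikube uses (acquire is an illustrative name):

    package machinelock

    import (
        "fmt"
        "time"
    )

    // acquire polls tryLock every delay until it succeeds or timeout elapses,
    // matching the Delay/Timeout fields printed in the log line above.
    func acquire(tryLock func() bool, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for !tryLock() {
            if time.Now().After(deadline) {
                return fmt.Errorf("lock not acquired within %s", timeout)
            }
            time.Sleep(delay)
        }
        return nil
    }
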
	I0729 04:13:46.810894    3891 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:13:46.810904    3891 fix.go:54] fixHost starting: 
	I0729 04:13:46.811552    3891 fix.go:112] recreateIfNeeded on running-upgrade-033000: state=Running err=<nil>
	W0729 04:13:46.811561    3891 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:13:46.815373    3891 out.go:177] * Updating the running qemu2 "running-upgrade-033000" VM ...
	I0729 04:13:46.823372    3891 machine.go:94] provisionDockerMachine start ...
	I0729 04:13:46.823401    3891 main.go:141] libmachine: Using SSH client type: native
	I0729 04:13:46.823497    3891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10042aa10] 0x10042d270 <nil>  [] 0s} localhost 50267 <nil> <nil>}
	I0729 04:13:46.823501    3891 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 04:13:46.897027    3891 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-033000
	
	I0729 04:13:46.897039    3891 buildroot.go:166] provisioning hostname "running-upgrade-033000"
	I0729 04:13:46.897079    3891 main.go:141] libmachine: Using SSH client type: native
	I0729 04:13:46.897191    3891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10042aa10] 0x10042d270 <nil>  [] 0s} localhost 50267 <nil> <nil>}
	I0729 04:13:46.897197    3891 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-033000 && echo "running-upgrade-033000" | sudo tee /etc/hostname
	I0729 04:13:46.970868    3891 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-033000
	
	I0729 04:13:46.970918    3891 main.go:141] libmachine: Using SSH client type: native
	I0729 04:13:46.971038    3891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10042aa10] 0x10042d270 <nil>  [] 0s} localhost 50267 <nil> <nil>}
	I0729 04:13:46.971046    3891 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-033000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-033000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-033000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 04:13:47.041694    3891 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 04:13:47.041706    3891 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19336-945/.minikube CaCertPath:/Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19336-945/.minikube}
	I0729 04:13:47.041712    3891 buildroot.go:174] setting up certificates
	I0729 04:13:47.041716    3891 provision.go:84] configureAuth start
	I0729 04:13:47.041723    3891 provision.go:143] copyHostCerts
	I0729 04:13:47.041789    3891 exec_runner.go:144] found /Users/jenkins/minikube-integration/19336-945/.minikube/ca.pem, removing ...
	I0729 04:13:47.041794    3891 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19336-945/.minikube/ca.pem
	I0729 04:13:47.041921    3891 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19336-945/.minikube/ca.pem (1078 bytes)
	I0729 04:13:47.042098    3891 exec_runner.go:144] found /Users/jenkins/minikube-integration/19336-945/.minikube/cert.pem, removing ...
	I0729 04:13:47.042101    3891 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19336-945/.minikube/cert.pem
	I0729 04:13:47.042151    3891 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19336-945/.minikube/cert.pem (1123 bytes)
	I0729 04:13:47.042262    3891 exec_runner.go:144] found /Users/jenkins/minikube-integration/19336-945/.minikube/key.pem, removing ...
	I0729 04:13:47.042265    3891 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19336-945/.minikube/key.pem
	I0729 04:13:47.042315    3891 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19336-945/.minikube/key.pem (1679 bytes)
	I0729 04:13:47.042405    3891 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19336-945/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-033000 san=[127.0.0.1 localhost minikube running-upgrade-033000]
	I0729 04:13:47.117657    3891 provision.go:177] copyRemoteCerts
	I0729 04:13:47.117711    3891 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 04:13:47.117731    3891 sshutil.go:53] new ssh client: &{IP:localhost Port:50267 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/running-upgrade-033000/id_rsa Username:docker}
	I0729 04:13:47.163532    3891 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 04:13:47.170428    3891 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 04:13:47.177565    3891 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 04:13:47.184356    3891 provision.go:87] duration metric: took 142.64ms to configureAuth
	I0729 04:13:47.184366    3891 buildroot.go:189] setting minikube options for container-runtime
	I0729 04:13:47.184475    3891 config.go:182] Loaded profile config "running-upgrade-033000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:13:47.184507    3891 main.go:141] libmachine: Using SSH client type: native
	I0729 04:13:47.184597    3891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10042aa10] 0x10042d270 <nil>  [] 0s} localhost 50267 <nil> <nil>}
	I0729 04:13:47.184604    3891 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 04:13:47.255119    3891 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 04:13:47.255128    3891 buildroot.go:70] root file system type: tmpfs
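
`df --output=fstype /` returning tmpfs is how the provisioner learns it is running on a RAM-backed Buildroot image, so unit files must be rewritten on every boot. The same probe can be made without df; a Linux-only Go sketch, where tmpfsMagic is TMPFS_MAGIC from linux/magic.h:

    package main

    import (
        "fmt"

        "golang.org/x/sys/unix"
    )

    const tmpfsMagic = 0x01021994 // TMPFS_MAGIC from linux/magic.h

    func main() {
        var st unix.Statfs_t
        if err := unix.Statfs("/", &st); err != nil {
            panic(err)
        }
        // A tmpfs root means the OS runs from RAM, as buildroot.go reports.
        fmt.Println("root is tmpfs:", st.Type == tmpfsMagic)
    }
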
	I0729 04:13:47.255178    3891 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 04:13:47.255219    3891 main.go:141] libmachine: Using SSH client type: native
	I0729 04:13:47.255318    3891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10042aa10] 0x10042d270 <nil>  [] 0s} localhost 50267 <nil> <nil>}
	I0729 04:13:47.255352    3891 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 04:13:47.332010    3891 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 04:13:47.332072    3891 main.go:141] libmachine: Using SSH client type: native
	I0729 04:13:47.332188    3891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10042aa10] 0x10042d270 <nil>  [] 0s} localhost 50267 <nil> <nil>}
	I0729 04:13:47.332198    3891 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 04:13:47.406683    3891 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 04:13:47.406696    3891 machine.go:97] duration metric: took 583.336333ms to provisionDockerMachine
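
The diff-or-replace one-liner above is what keeps re-provisioning idempotent: docker is restarted only when the freshly rendered unit actually differs from the installed one. A minimal Go sketch of the same pattern; updateUnit is an illustrative name, and it assumes permission to write the unit and invoke systemctl:

    package provision

    import (
        "bytes"
        "os"
        "os/exec"
    )

    // updateUnit installs rendered at path only if it changed, then reloads
    // systemd and restarts docker, mirroring the shell pipeline in the log.
    func updateUnit(path string, rendered []byte) error {
        current, _ := os.ReadFile(path) // a missing unit reads as empty
        if bytes.Equal(current, rendered) {
            return nil // unchanged: skip the daemon-reload and restart
        }
        if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
            return err
        }
        if err := os.Rename(path+".new", path); err != nil {
            return err
        }
        if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
            return err
        }
        return exec.Command("systemctl", "restart", "docker").Run()
    }
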
	I0729 04:13:47.406702    3891 start.go:293] postStartSetup for "running-upgrade-033000" (driver="qemu2")
	I0729 04:13:47.406709    3891 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 04:13:47.406763    3891 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 04:13:47.406771    3891 sshutil.go:53] new ssh client: &{IP:localhost Port:50267 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/running-upgrade-033000/id_rsa Username:docker}
	I0729 04:13:47.443683    3891 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 04:13:47.444862    3891 info.go:137] Remote host: Buildroot 2021.02.12
	I0729 04:13:47.444869    3891 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19336-945/.minikube/addons for local assets ...
	I0729 04:13:47.444947    3891 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19336-945/.minikube/files for local assets ...
	I0729 04:13:47.445071    3891 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19336-945/.minikube/files/etc/ssl/certs/13972.pem -> 13972.pem in /etc/ssl/certs
	I0729 04:13:47.445208    3891 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 04:13:47.447874    3891 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/files/etc/ssl/certs/13972.pem --> /etc/ssl/certs/13972.pem (1708 bytes)
	I0729 04:13:47.454529    3891 start.go:296] duration metric: took 47.823709ms for postStartSetup
	I0729 04:13:47.454542    3891 fix.go:56] duration metric: took 643.663ms for fixHost
	I0729 04:13:47.454574    3891 main.go:141] libmachine: Using SSH client type: native
	I0729 04:13:47.454669    3891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10042aa10] 0x10042d270 <nil>  [] 0s} localhost 50267 <nil> <nil>}
	I0729 04:13:47.454676    3891 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 04:13:47.526644    3891 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722251627.462827930
	
	I0729 04:13:47.526653    3891 fix.go:216] guest clock: 1722251627.462827930
	I0729 04:13:47.526657    3891 fix.go:229] Guest: 2024-07-29 04:13:47.46282793 -0700 PDT Remote: 2024-07-29 04:13:47.454544 -0700 PDT m=+0.750383376 (delta=8.28393ms)
	I0729 04:13:47.526674    3891 fix.go:200] guest clock delta is within tolerance: 8.28393ms
	I0729 04:13:47.526677    3891 start.go:83] releasing machines lock for "running-upgrade-033000", held for 715.811ms
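
The guest clock check above parses the output of `date +%s.%N` and compares it with the host clock; the 8.28393ms delta is within tolerance, so the guest clock is left alone. A minimal Go sketch of that comparison, assuming %N prints its full nine digits (withinTolerance is an illustrative name):

    package clockcheck

    import (
        "strconv"
        "strings"
        "time"
    )

    // withinTolerance parses "seconds.nanoseconds" from the guest and checks
    // the absolute skew against the host clock.
    func withinTolerance(guestOut string, tol time.Duration) (bool, error) {
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return false, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return false, err
            }
        }
        delta := time.Since(time.Unix(sec, nsec))
        if delta < 0 {
            delta = -delta
        }
        return delta <= tol, nil
    }
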
	I0729 04:13:47.526739    3891 ssh_runner.go:195] Run: cat /version.json
	I0729 04:13:47.526749    3891 sshutil.go:53] new ssh client: &{IP:localhost Port:50267 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/running-upgrade-033000/id_rsa Username:docker}
	I0729 04:13:47.526739    3891 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 04:13:47.526780    3891 sshutil.go:53] new ssh client: &{IP:localhost Port:50267 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/running-upgrade-033000/id_rsa Username:docker}
	W0729 04:13:47.527295    3891 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50267: connect: connection refused
	I0729 04:13:47.527315    3891 retry.go:31] will retry after 243.36698ms: dial tcp [::1]:50267: connect: connection refused
	W0729 04:13:47.816132    3891 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0729 04:13:47.816286    3891 ssh_runner.go:195] Run: systemctl --version
	I0729 04:13:47.819273    3891 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 04:13:47.822052    3891 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 04:13:47.822091    3891 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0729 04:13:47.826480    3891 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0729 04:13:47.832758    3891 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 04:13:47.832768    3891 start.go:495] detecting cgroup driver to use...
	I0729 04:13:47.832856    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 04:13:47.839463    3891 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0729 04:13:47.842981    3891 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 04:13:47.846501    3891 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 04:13:47.846531    3891 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 04:13:47.850214    3891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 04:13:47.853585    3891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 04:13:47.856640    3891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 04:13:47.859403    3891 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 04:13:47.862310    3891 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 04:13:47.865351    3891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 04:13:47.868586    3891 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 04:13:47.871494    3891 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 04:13:47.874473    3891 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 04:13:47.877689    3891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:13:47.961627    3891 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0729 04:13:47.968556    3891 start.go:495] detecting cgroup driver to use...
	I0729 04:13:47.968628    3891 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 04:13:47.974231    3891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 04:13:47.979688    3891 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 04:13:47.988104    3891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 04:13:47.992630    3891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 04:13:47.997758    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 04:13:48.002871    3891 ssh_runner.go:195] Run: which cri-dockerd
	I0729 04:13:48.004145    3891 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 04:13:48.006808    3891 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 04:13:48.011803    3891 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 04:13:48.101077    3891 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 04:13:48.178492    3891 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 04:13:48.178553    3891 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 04:13:48.184092    3891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:13:48.252933    3891 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 04:13:49.662871    3891 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.409968334s)
	I0729 04:13:49.662945    3891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 04:13:49.667803    3891 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0729 04:13:49.674337    3891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 04:13:49.678901    3891 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 04:13:49.752230    3891 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 04:13:49.819435    3891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:13:49.893381    3891 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 04:13:49.899207    3891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 04:13:49.903966    3891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:13:49.968809    3891 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 04:13:50.006224    3891 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 04:13:50.006307    3891 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 04:13:50.008539    3891 start.go:563] Will wait 60s for crictl version
	I0729 04:13:50.008579    3891 ssh_runner.go:195] Run: which crictl
	I0729 04:13:50.010666    3891 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 04:13:50.022810    3891 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0729 04:13:50.022893    3891 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 04:13:50.035931    3891 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 04:13:50.056076    3891 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0729 04:13:50.056199    3891 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0729 04:13:50.057613    3891 kubeadm.go:883] updating cluster {Name:running-upgrade-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50299 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0729 04:13:50.057652    3891 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 04:13:50.057689    3891 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 04:13:50.067833    3891 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 04:13:50.067841    3891 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 04:13:50.067892    3891 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 04:13:50.070912    3891 ssh_runner.go:195] Run: which lz4
	I0729 04:13:50.072205    3891 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 04:13:50.073407    3891 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 04:13:50.073419    3891 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0729 04:13:50.984720    3891 docker.go:649] duration metric: took 912.573ms to copy over tarball
	I0729 04:13:50.984782    3891 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 04:13:52.109603    3891 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.124844541s)
	I0729 04:13:52.109616    3891 ssh_runner.go:146] rm: /preloaded.tar.lz4
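
The preload path above avoids pulling images over the network: a cached .tar.lz4 of the image store is copied into the guest, unpacked over /var, and then removed. A sketch of the extraction step as run on the guest, shelling out exactly as the log does (extract is an illustrative name; it assumes sudo, tar, and lz4 are available):

    package preload

    import "os/exec"

    // extract unpacks the preloaded image tarball over /var and removes it,
    // mirroring the two commands logged above.
    func extract() error {
        untar := exec.Command("sudo", "tar",
            // keep file capabilities that kubelet/docker binaries rely on
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if err := untar.Run(); err != nil {
            return err
        }
        return exec.Command("sudo", "rm", "/preloaded.tar.lz4").Run()
    }
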
	I0729 04:13:52.125490    3891 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 04:13:52.128480    3891 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0729 04:13:52.133585    3891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:13:52.198314    3891 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 04:13:53.394062    3891 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.195762959s)
	I0729 04:13:53.394171    3891 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 04:13:53.409154    3891 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 04:13:53.409162    3891 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 04:13:53.409167    3891 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 04:13:53.414730    3891 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:13:53.416535    3891 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:13:53.418698    3891 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:13:53.418737    3891 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:13:53.420078    3891 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:13:53.420372    3891 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:13:53.421109    3891 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:13:53.421335    3891 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:13:53.422866    3891 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:13:53.423546    3891 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:13:53.423974    3891 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:13:53.424081    3891 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:13:53.425211    3891 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:13:53.425430    3891 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 04:13:53.426269    3891 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:13:53.427555    3891 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 04:13:53.841155    3891 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:13:53.860408    3891 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0729 04:13:53.860440    3891 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:13:53.860498    3891 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:13:53.865480    3891 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:13:53.869869    3891 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:13:53.876313    3891 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:13:53.879623    3891 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0729 04:13:53.887038    3891 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0729 04:13:53.887059    3891 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:13:53.887117    3891 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:13:53.894664    3891 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 04:13:53.895027    3891 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0729 04:13:53.895043    3891 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:13:53.895067    3891 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:13:53.901208    3891 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0729 04:13:53.901230    3891 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:13:53.901281    3891 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	W0729 04:13:53.911182    3891 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 04:13:53.911317    3891 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:13:53.913470    3891 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0729 04:13:53.918492    3891 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 04:13:53.923074    3891 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0729 04:13:53.923086    3891 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0729 04:13:53.923092    3891 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:13:53.923133    3891 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0729 04:13:53.923217    3891 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0729 04:13:53.933886    3891 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0729 04:13:53.933913    3891 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:13:53.933940    3891 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0729 04:13:53.933950    3891 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0729 04:13:53.933967    3891 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:13:53.933969    3891 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0729 04:13:53.947592    3891 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 04:13:53.954821    3891 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 04:13:53.954937    3891 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 04:13:53.956020    3891 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 04:13:53.956093    3891 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0729 04:13:53.957471    3891 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0729 04:13:53.957482    3891 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0729 04:13:53.957675    3891 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0729 04:13:53.957686    3891 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0729 04:13:53.979534    3891 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 04:13:53.979547    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0729 04:13:54.031616    3891 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0729 04:13:54.031633    3891 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 04:13:54.031643    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0729 04:13:54.036059    3891 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 04:13:54.036171    3891 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:13:54.077750    3891 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 04:13:54.077789    3891 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0729 04:13:54.077808    3891 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:13:54.077870    3891 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:13:54.647157    3891 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 04:13:54.647467    3891 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 04:13:54.650856    3891 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0729 04:13:54.650898    3891 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0729 04:13:54.706444    3891 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 04:13:54.706459    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0729 04:13:54.944930    3891 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
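
Each cache transfer above ends with `sudo cat <tarball> | docker load`, streaming the image archive into the daemon. A minimal Go sketch of that load step (load is an illustrative name; it assumes docker is on PATH and the caller may talk to the daemon):

    package imageload

    import (
        "os"
        "os/exec"
    )

    // load streams a cached image tarball into the docker daemon, the Go
    // equivalent of the `cat <tar> | docker load` pipelines in the log.
    func load(tarball string) error {
        f, err := os.Open(tarball)
        if err != nil {
            return err
        }
        defer f.Close()
        cmd := exec.Command("docker", "load")
        cmd.Stdin = f
        return cmd.Run()
    }
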
	I0729 04:13:54.944967    3891 cache_images.go:92] duration metric: took 1.535844667s to LoadCachedImages
	W0729 04:13:54.945008    3891 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0729 04:13:54.945014    3891 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0729 04:13:54.945073    3891 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-033000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 04:13:54.945134    3891 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 04:13:54.958536    3891 cni.go:84] Creating CNI manager for ""
	I0729 04:13:54.958547    3891 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:13:54.958552    3891 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 04:13:54.958560    3891 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-033000 NodeName:running-upgrade-033000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 04:13:54.958623    3891 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-033000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 04:13:54.958676    3891 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0729 04:13:54.961800    3891 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 04:13:54.961829    3891 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 04:13:54.964818    3891 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0729 04:13:54.969673    3891 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 04:13:54.974620    3891 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0729 04:13:54.980221    3891 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0729 04:13:54.981712    3891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:13:55.056446    3891 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 04:13:55.061583    3891 certs.go:68] Setting up /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/running-upgrade-033000 for IP: 10.0.2.15
	I0729 04:13:55.061589    3891 certs.go:194] generating shared ca certs ...
	I0729 04:13:55.061597    3891 certs.go:226] acquiring lock for ca certs: {Name:mk0965f831896eb9b1f5dee9ac66a2ece4b593d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:13:55.061755    3891 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19336-945/.minikube/ca.key
	I0729 04:13:55.061807    3891 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19336-945/.minikube/proxy-client-ca.key
	I0729 04:13:55.061813    3891 certs.go:256] generating profile certs ...
	I0729 04:13:55.061873    3891 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/running-upgrade-033000/client.key
	I0729 04:13:55.061891    3891 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/running-upgrade-033000/apiserver.key.13211bd2
	I0729 04:13:55.061901    3891 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/running-upgrade-033000/apiserver.crt.13211bd2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0729 04:13:55.156636    3891 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/running-upgrade-033000/apiserver.crt.13211bd2 ...
	I0729 04:13:55.156651    3891 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/running-upgrade-033000/apiserver.crt.13211bd2: {Name:mk68a3e7e0e266ee02f7fa7fa347f9f3447e72fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:13:55.157181    3891 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/running-upgrade-033000/apiserver.key.13211bd2 ...
	I0729 04:13:55.157187    3891 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/running-upgrade-033000/apiserver.key.13211bd2: {Name:mkf904904280e4e70ef4c39382cfc9cad6066a61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:13:55.157330    3891 certs.go:381] copying /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/running-upgrade-033000/apiserver.crt.13211bd2 -> /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/running-upgrade-033000/apiserver.crt
	I0729 04:13:55.157471    3891 certs.go:385] copying /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/running-upgrade-033000/apiserver.key.13211bd2 -> /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/running-upgrade-033000/apiserver.key
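
The apiserver certificate generated above is issued for the SANs [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]; 10.96.0.1 is simply the first host address of the ServiceCIDR (10.96.0.0/12), i.e. the in-cluster IP of the default `kubernetes` Service, which clients inside the cluster use to reach the apiserver. A minimal IPv4-only sketch of that derivation (firstHostIP is an illustrative name and skips byte-carry handling):

    package cidr

    import "net"

    // firstHostIP returns the network address plus one, which for
    // 10.96.0.0/12 yields 10.96.0.1, the IP placed on the apiserver cert.
    func firstHostIP(s string) (net.IP, error) {
        _, ipnet, err := net.ParseCIDR(s)
        if err != nil {
            return nil, err
        }
        first := make(net.IP, len(ipnet.IP))
        copy(first, ipnet.IP)
        first[len(first)-1]++ // no carry handling: fine for small subnets
        return first, nil
    }
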
	I0729 04:13:55.157629    3891 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/running-upgrade-033000/proxy-client.key
	I0729 04:13:55.157758    3891 certs.go:484] found cert: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/1397.pem (1338 bytes)
	W0729 04:13:55.157786    3891 certs.go:480] ignoring /Users/jenkins/minikube-integration/19336-945/.minikube/certs/1397_empty.pem, impossibly tiny 0 bytes
	I0729 04:13:55.157791    3891 certs.go:484] found cert: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 04:13:55.157811    3891 certs.go:484] found cert: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem (1078 bytes)
	I0729 04:13:55.157828    3891 certs.go:484] found cert: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem (1123 bytes)
	I0729 04:13:55.157845    3891 certs.go:484] found cert: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/key.pem (1679 bytes)
	I0729 04:13:55.157883    3891 certs.go:484] found cert: /Users/jenkins/minikube-integration/19336-945/.minikube/files/etc/ssl/certs/13972.pem (1708 bytes)
	I0729 04:13:55.158183    3891 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 04:13:55.165839    3891 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 04:13:55.173695    3891 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 04:13:55.181333    3891 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 04:13:55.188727    3891 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/running-upgrade-033000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 04:13:55.195395    3891 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/running-upgrade-033000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 04:13:55.202360    3891 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/running-upgrade-033000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 04:13:55.209971    3891 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/running-upgrade-033000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 04:13:55.217648    3891 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 04:13:55.224607    3891 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/certs/1397.pem --> /usr/share/ca-certificates/1397.pem (1338 bytes)
	I0729 04:13:55.231372    3891 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/files/etc/ssl/certs/13972.pem --> /usr/share/ca-certificates/13972.pem (1708 bytes)
	I0729 04:13:55.238144    3891 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 04:13:55.243322    3891 ssh_runner.go:195] Run: openssl version
	I0729 04:13:55.245083    3891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13972.pem && ln -fs /usr/share/ca-certificates/13972.pem /etc/ssl/certs/13972.pem"
	I0729 04:13:55.248099    3891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13972.pem
	I0729 04:13:55.249476    3891 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:42 /usr/share/ca-certificates/13972.pem
	I0729 04:13:55.249495    3891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13972.pem
	I0729 04:13:55.251287    3891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13972.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 04:13:55.254203    3891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 04:13:55.257571    3891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 04:13:55.259068    3891 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:35 /usr/share/ca-certificates/minikubeCA.pem
	I0729 04:13:55.259086    3891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 04:13:55.260888    3891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 04:13:55.263537    3891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1397.pem && ln -fs /usr/share/ca-certificates/1397.pem /etc/ssl/certs/1397.pem"
	I0729 04:13:55.266594    3891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1397.pem
	I0729 04:13:55.268111    3891 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:42 /usr/share/ca-certificates/1397.pem
	I0729 04:13:55.268131    3891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1397.pem
	I0729 04:13:55.269848    3891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1397.pem /etc/ssl/certs/51391683.0"
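	The sweep above wires each CA into the guest's system trust store: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 at it. A minimal Go sketch of one iteration, assuming an openssl binary on PATH and root on the guest (paths mirror the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		// `openssl x509 -hash -noout` prints the subject hash, e.g. b5213941 as in the log.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		os.Remove(link) // replace any stale link for this hash
		if err := os.Symlink(pem, link); err != nil {
			panic(err)
		}
	}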
	I0729 04:13:55.273047    3891 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 04:13:55.274645    3891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 04:13:55.276563    3891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 04:13:55.278334    3891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 04:13:55.280149    3891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 04:13:55.281957    3891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 04:13:55.283946    3891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
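	The six `openssl x509 ... -checkend 86400` runs above ask whether each control-plane certificate remains valid for at least the next 86400 seconds (24 hours); a non-zero exit would force regeneration. A sketch of the equivalent check in Go's crypto/x509, assuming a PEM-encoded certificate on disk (the path below is illustrative):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate's NotAfter falls inside
	// the window — the same condition that makes `-checkend` exit non-zero.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", expiring)
	}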
	I0729 04:13:55.285749    3891 kubeadm.go:392] StartCluster: {Name:running-upgrade-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50299 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 04:13:55.285816    3891 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 04:13:55.296228    3891 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 04:13:55.299578    3891 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 04:13:55.299583    3891 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 04:13:55.299604    3891 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 04:13:55.302504    3891 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 04:13:55.302746    3891 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-033000" does not appear in /Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:13:55.302792    3891 kubeconfig.go:62] /Users/jenkins/minikube-integration/19336-945/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-033000" cluster setting kubeconfig missing "running-upgrade-033000" context setting]
	I0729 04:13:55.302930    3891 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/kubeconfig: {Name:mkc1463454d977493e341af62af023d087f8e1b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:13:55.303605    3891 kapi.go:59] client config for running-upgrade-033000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19336-945/.minikube/profiles/running-upgrade-033000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19336-945/.minikube/profiles/running-upgrade-033000/client.key", CAFile:"/Users/jenkins/minikube-integration/19336-945/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1017c0080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 04:13:55.303949    3891 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 04:13:55.306781    3891 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-033000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
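	The drift check above is a plain unified diff of the deployed kubeadm.yaml against the freshly rendered one: exit 0 means no drift, exit 1 plus output means reconfigure. A minimal Go sketch of that decision, assuming a POSIX diff on PATH (paths taken from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// configDrift returns the unified diff and whether the files differ.
	func configDrift(current, proposed string) (string, bool, error) {
		out, err := exec.Command("diff", "-u", current, proposed).CombinedOutput()
		if err == nil {
			return "", false, nil // exit 0: identical files
		}
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			return string(out), true, nil // exit 1: files differ — drift detected
		}
		return "", false, err // exit 2 or exec failure: a real error
	}

	func main() {
		diff, drifted, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		if drifted {
			fmt.Print(diff) // reconfigure the cluster from the new file
		}
	}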
	I0729 04:13:55.306786    3891 kubeadm.go:1160] stopping kube-system containers ...
	I0729 04:13:55.306827    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 04:13:55.317527    3891 docker.go:483] Stopping containers: [1862cd81e2b8 cfcc693e4027 ea8ef8baad9d ca58595295a8 829617c2dfd3 738ae555cc7d a1bd11a4a42b 59a9ce633b84 2b705fa1d0ca 296a044c8573 784fa86f27ad 016dcfd8857e 915f09020f86]
	I0729 04:13:55.317596    3891 ssh_runner.go:195] Run: docker stop 1862cd81e2b8 cfcc693e4027 ea8ef8baad9d ca58595295a8 829617c2dfd3 738ae555cc7d a1bd11a4a42b 59a9ce633b84 2b705fa1d0ca 296a044c8573 784fa86f27ad 016dcfd8857e 915f09020f86
	I0729 04:13:55.651764    3891 ssh_runner.go:195] Run: sudo systemctl stop kubelet
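	Stopping the kube-system containers is a two-step pattern: discover IDs with a docker name filter, then stop them in one invocation before stopping the kubelet. A sketch under the assumption of a docker CLI on PATH (the filter pattern is the one in the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// List every container (running or not) whose name matches the
		// kube-system pod naming convention, printing only IDs.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_.*_(kube-system)_",
			"--format", "{{.ID}}").Output()
		if err != nil {
			panic(err)
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			return // nothing to stop
		}
		fmt.Println("Stopping containers:", ids)
		args := append([]string{"stop"}, ids...)
		if err := exec.Command("docker", args...).Run(); err != nil {
			panic(err)
		}
	}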
	I0729 04:13:55.737892    3891 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 04:13:55.741673    3891 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Jul 29 11:13 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jul 29 11:13 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul 29 11:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jul 29 11:13 /etc/kubernetes/scheduler.conf
	
	I0729 04:13:55.741708    3891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/admin.conf
	I0729 04:13:55.744543    3891 kubeadm.go:163] "https://control-plane.minikube.internal:50299" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 04:13:55.744567    3891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 04:13:55.747339    3891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/kubelet.conf
	I0729 04:13:55.750089    3891 kubeadm.go:163] "https://control-plane.minikube.internal:50299" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 04:13:55.750113    3891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 04:13:55.753145    3891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/controller-manager.conf
	I0729 04:13:55.756253    3891 kubeadm.go:163] "https://control-plane.minikube.internal:50299" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 04:13:55.756277    3891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 04:13:55.758900    3891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/scheduler.conf
	I0729 04:13:55.762052    3891 kubeadm.go:163] "https://control-plane.minikube.internal:50299" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 04:13:55.762074    3891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
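	The grep-then-rm loop above sweeps the four kubeconfig files: any file not already pointing at the expected control-plane endpoint is deleted so the following `kubeadm init phase kubeconfig` regenerates it. A compact Go sketch of the same sweep (endpoint and file list taken from the log; run as root):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func main() {
		endpoint := []byte("https://control-plane.minikube.internal:50299")
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			// Unreadable or missing the endpoint: treat as stale and remove;
			// kubeadm recreates it during the restart phases below.
			if err != nil || !bytes.Contains(data, endpoint) {
				fmt.Printf("%s missing expected endpoint - removing\n", f)
				os.Remove(f)
			}
		}
	}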
	I0729 04:13:55.765388    3891 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 04:13:55.768308    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:13:55.797805    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:13:56.320201    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:13:56.514317    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:13:56.546293    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:13:56.567813    3891 api_server.go:52] waiting for apiserver process to appear ...
	I0729 04:13:56.567885    3891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:13:57.070153    3891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:13:57.569944    3891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:13:57.579547    3891 api_server.go:72] duration metric: took 1.0117685s to wait for apiserver process to appear ...
	I0729 04:13:57.579556    3891 api_server.go:88] waiting for apiserver healthz status ...
	I0729 04:13:57.579566    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:14:02.580773    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:14:02.580823    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:14:07.581423    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:14:07.581494    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:14:12.582058    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:14:12.582111    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:14:17.582547    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:14:17.582666    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:14:22.583764    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:14:22.583816    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:14:27.584835    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:14:27.584932    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:14:32.585561    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:14:32.585640    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:14:37.587373    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:14:37.587446    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:14:42.589728    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:14:42.589766    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:14:47.592018    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:14:47.592102    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:14:52.593911    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:14:52.593988    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:14:57.595619    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
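	Each "Checking apiserver healthz" / "stopped" pair above is one probe of https://10.0.2.15:8443/healthz timing out after roughly five seconds; only after the whole budget is spent does the run fall back to gathering diagnostics. A minimal sketch of such a probe loop, assuming the 5s per-request timeout seen in the log (TLS verification is skipped here only because this illustrative probe carries no CA bundle; minikube itself authenticates the endpoint):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s cadence in the log
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(1 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond) // brief pause between attempts
		}
		fmt.Println("apiserver never became healthy; gathering logs")
	}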
	I0729 04:14:57.595994    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:14:57.628132    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:14:57.628263    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:14:57.647836    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:14:57.647944    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:14:57.664512    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:14:57.664601    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:14:57.675852    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:14:57.675921    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:14:57.686478    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:14:57.686550    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:14:57.696535    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:14:57.696608    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:14:57.707192    3891 logs.go:276] 0 containers: []
	W0729 04:14:57.707203    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:14:57.707263    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:14:57.718284    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:14:57.718302    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:14:57.718308    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:14:57.732374    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:14:57.732389    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:14:57.744096    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:14:57.744105    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:14:57.759115    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:14:57.759126    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:14:57.770180    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:14:57.770193    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:14:57.796486    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:14:57.796496    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:14:57.809294    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:14:57.809308    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:14:57.846540    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:14:57.846549    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:14:57.868840    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:14:57.868852    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:14:57.879680    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:14:57.879691    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:14:57.891262    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:14:57.891271    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:14:57.908410    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:14:57.908423    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:14:57.913423    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:14:57.913432    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:14:57.983159    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:14:57.983173    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:14:57.997489    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:14:57.997499    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:14:58.008308    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:14:58.008323    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
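	The diagnostic pass above fans out one `docker logs --tail 400` per discovered container, labelled by component. A sketch of that fan-out, with container IDs hard-coded to the ones this run discovered purely for illustration (minikube derives them from the `docker ps` filters shown earlier):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		containers := map[string]string{
			"kube-apiserver": "2d6d0851f546",
			"etcd":           "1c93c1680863",
			"coredns":        "566e808c856a",
		}
		for name, id := range containers {
			// Tail the last 400 lines of each component container.
			out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				fmt.Printf("== %s [%s]: %v ==\n", name, id, err)
				continue
			}
			fmt.Printf("== %s [%s] ==\n%s\n", name, id, out)
		}
	}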
	I0729 04:15:00.519628    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:15:05.521905    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:15:05.522333    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:15:05.566059    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:15:05.566194    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:15:05.589043    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:15:05.589142    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:15:05.602554    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:15:05.602618    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:15:05.614365    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:15:05.614433    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:15:05.626136    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:15:05.626197    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:15:05.636951    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:15:05.637022    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:15:05.647320    3891 logs.go:276] 0 containers: []
	W0729 04:15:05.647331    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:15:05.647391    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:15:05.658488    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:15:05.658506    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:15:05.658513    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:15:05.662628    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:15:05.662635    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:15:05.697244    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:15:05.697257    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:15:05.711338    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:15:05.711348    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:15:05.722739    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:15:05.722752    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:15:05.736895    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:15:05.736908    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:15:05.759217    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:15:05.759230    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:15:05.770549    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:15:05.770562    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:15:05.782766    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:15:05.782779    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:15:05.800153    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:15:05.800163    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:15:05.835626    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:15:05.835636    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:15:05.849906    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:15:05.849918    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:15:05.868645    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:15:05.868655    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:15:05.880324    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:15:05.880334    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:15:05.891954    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:15:05.891967    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:15:05.916562    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:15:05.916570    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:15:08.431175    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:15:13.433530    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:15:13.433913    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:15:13.469543    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:15:13.469675    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:15:13.492798    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:15:13.492900    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:15:13.506598    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:15:13.506672    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:15:13.518739    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:15:13.518808    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:15:13.531031    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:15:13.531107    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:15:13.542283    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:15:13.542355    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:15:13.557936    3891 logs.go:276] 0 containers: []
	W0729 04:15:13.557946    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:15:13.558004    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:15:13.568770    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:15:13.568788    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:15:13.568794    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:15:13.604732    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:15:13.604742    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:15:13.618205    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:15:13.618216    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:15:13.630572    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:15:13.630582    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:15:13.642243    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:15:13.642252    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:15:13.668832    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:15:13.668839    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:15:13.687929    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:15:13.687941    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:15:13.698950    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:15:13.698962    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:15:13.712839    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:15:13.712850    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:15:13.724608    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:15:13.724621    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:15:13.742577    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:15:13.742586    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:15:13.754213    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:15:13.754226    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:15:13.759826    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:15:13.759834    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:15:13.794155    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:15:13.794164    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:15:13.809002    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:15:13.809011    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:15:13.820227    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:15:13.820242    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:15:16.334553    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:15:21.337316    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:15:21.337709    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:15:21.375491    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:15:21.375627    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:15:21.395635    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:15:21.395722    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:15:21.409480    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:15:21.409560    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:15:21.425678    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:15:21.425753    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:15:21.446155    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:15:21.446229    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:15:21.457497    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:15:21.457565    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:15:21.467892    3891 logs.go:276] 0 containers: []
	W0729 04:15:21.467904    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:15:21.467968    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:15:21.479025    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:15:21.479045    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:15:21.479060    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:15:21.491917    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:15:21.491928    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:15:21.511248    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:15:21.511264    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:15:21.523763    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:15:21.523774    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:15:21.548220    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:15:21.548228    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:15:21.581898    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:15:21.581910    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:15:21.593898    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:15:21.593911    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:15:21.610998    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:15:21.611008    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:15:21.622445    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:15:21.622458    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:15:21.637502    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:15:21.637514    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:15:21.649022    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:15:21.649033    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:15:21.653410    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:15:21.653416    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:15:21.667703    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:15:21.667714    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:15:21.682506    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:15:21.682517    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:15:21.717576    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:15:21.717593    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:15:21.731871    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:15:21.731882    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:15:24.244943    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:15:29.247653    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:15:29.247940    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:15:29.274829    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:15:29.274957    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:15:29.292828    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:15:29.292918    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:15:29.306740    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:15:29.306809    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:15:29.318353    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:15:29.318415    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:15:29.329018    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:15:29.329088    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:15:29.339601    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:15:29.339661    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:15:29.350388    3891 logs.go:276] 0 containers: []
	W0729 04:15:29.350402    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:15:29.350463    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:15:29.362141    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:15:29.362156    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:15:29.362162    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:15:29.398252    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:15:29.398259    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:15:29.433986    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:15:29.433997    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:15:29.447305    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:15:29.447315    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:15:29.459032    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:15:29.459042    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:15:29.473293    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:15:29.473303    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:15:29.487379    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:15:29.487388    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:15:29.502093    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:15:29.502104    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:15:29.516180    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:15:29.516191    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:15:29.529174    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:15:29.529186    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:15:29.541898    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:15:29.541908    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:15:29.546007    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:15:29.546014    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:15:29.564617    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:15:29.564627    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:15:29.575871    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:15:29.575884    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:15:29.593302    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:15:29.593312    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:15:29.605772    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:15:29.605783    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:15:32.131781    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:15:37.134634    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:15:37.135142    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:15:37.175121    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:15:37.175279    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:15:37.196490    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:15:37.196592    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:15:37.211894    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:15:37.211959    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:15:37.224648    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:15:37.224708    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:15:37.238538    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:15:37.238611    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:15:37.252530    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:15:37.252601    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:15:37.262668    3891 logs.go:276] 0 containers: []
	W0729 04:15:37.262680    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:15:37.262738    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:15:37.272987    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:15:37.273005    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:15:37.273011    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:15:37.277216    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:15:37.277225    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:15:37.296017    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:15:37.296027    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:15:37.308206    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:15:37.308216    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:15:37.319668    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:15:37.319679    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:15:37.357972    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:15:37.357984    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:15:37.369836    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:15:37.369848    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:15:37.381177    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:15:37.381191    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:15:37.420211    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:15:37.420223    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:15:37.431529    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:15:37.431542    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:15:37.443421    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:15:37.443432    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:15:37.467236    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:15:37.467247    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:15:37.493679    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:15:37.493686    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:15:37.508502    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:15:37.508513    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:15:37.522350    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:15:37.522362    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:15:37.536811    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:15:37.536820    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:15:40.051403    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:15:45.054202    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:15:45.054618    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:15:45.097064    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:15:45.097217    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:15:45.117629    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:15:45.117739    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:15:45.134414    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:15:45.134489    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:15:45.146409    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:15:45.146477    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:15:45.157747    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:15:45.157815    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:15:45.168072    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:15:45.168133    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:15:45.178530    3891 logs.go:276] 0 containers: []
	W0729 04:15:45.178545    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:15:45.178607    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:15:45.189329    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:15:45.189346    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:15:45.189351    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:15:45.201241    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:15:45.201252    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:15:45.212981    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:15:45.212992    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:15:45.226707    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:15:45.226721    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:15:45.246560    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:15:45.246572    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:15:45.257617    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:15:45.257631    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:15:45.268926    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:15:45.268939    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:15:45.303334    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:15:45.303343    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:15:45.323055    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:15:45.323068    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:15:45.336940    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:15:45.336954    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:15:45.351114    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:15:45.351124    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:15:45.362728    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:15:45.362741    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:15:45.387271    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:15:45.387281    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:15:45.423469    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:15:45.423479    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:15:45.427707    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:15:45.427716    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:15:45.444588    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:15:45.444598    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:15:47.959120    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:15:52.961812    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:15:52.962210    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:15:53.004146    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:15:53.004281    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:15:53.025472    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:15:53.025587    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:15:53.040042    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:15:53.040116    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:15:53.052533    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:15:53.052629    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:15:53.063356    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:15:53.063426    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:15:53.073944    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:15:53.074016    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:15:53.084027    3891 logs.go:276] 0 containers: []
	W0729 04:15:53.084040    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:15:53.084098    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:15:53.094427    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:15:53.094442    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:15:53.094448    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:15:53.113878    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:15:53.113888    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:15:53.124957    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:15:53.124968    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:15:53.162321    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:15:53.162331    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:15:53.176655    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:15:53.176665    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:15:53.191023    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:15:53.191033    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:15:53.195913    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:15:53.195922    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:15:53.229738    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:15:53.229751    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:15:53.241702    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:15:53.241711    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:15:53.252754    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:15:53.252767    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:15:53.264609    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:15:53.264618    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:15:53.281912    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:15:53.281921    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:15:53.292916    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:15:53.292928    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:15:53.318500    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:15:53.318510    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:15:53.330100    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:15:53.330109    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:15:53.344662    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:15:53.344677    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:15:55.858452    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:16:00.860167    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
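
	The two lines above show the pattern that repeats for the rest of this test: minikube probes https://10.0.2.15:8443/healthz, the GET times out after roughly five seconds (Client.Timeout), and a fresh container enumeration and log sweep follows. A minimal sketch of that kind of health poll in Go, assuming a plain net/http client against a self-signed apiserver certificate; the endpoint and the five-second client timeout are taken from the log, while the retry count, interval, and function name are illustrative assumptions, not minikube's actual code:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// pollHealthz probes an apiserver /healthz endpoint until it answers
	// 200 OK or the attempts run out. The 5s timeout mirrors the
	// Client.Timeout errors in the log; attempts/interval are assumptions.
	func pollHealthz(url string, attempts int, interval time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The test cluster serves a self-signed cert, so a probe
				// like this has to skip (or pin) verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for i := 0; i < attempts; i++ {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := pollHealthz("https://10.0.2.15:8443/healthz", 10, 2*time.Second); err != nil {
			fmt.Println(err)
		}
	}

	With a 5s per-request timeout plus a short pause between attempts, each probe cycle spans a few seconds, which matches the spacing of the healthz lines in this log.
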
	I0729 04:16:00.860353    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:16:00.875289    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:16:00.875376    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:16:00.887564    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:16:00.887643    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:16:00.898217    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:16:00.898287    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:16:00.908385    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:16:00.908453    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:16:00.918685    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:16:00.918754    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:16:00.929427    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:16:00.929491    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:16:00.939814    3891 logs.go:276] 0 containers: []
	W0729 04:16:00.939830    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:16:00.939880    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:16:00.950115    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:16:00.950130    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:16:00.950134    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:16:00.961456    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:16:00.961466    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:16:00.996870    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:16:00.996878    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:16:01.000942    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:16:01.000951    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:16:01.015682    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:16:01.015694    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:16:01.026952    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:16:01.026965    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:16:01.038120    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:16:01.038138    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:16:01.051677    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:16:01.051688    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:16:01.063025    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:16:01.063035    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:16:01.088035    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:16:01.088046    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:16:01.123594    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:16:01.123607    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:16:01.141220    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:16:01.141233    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:16:01.152154    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:16:01.152164    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:16:01.163655    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:16:01.163668    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:16:01.175063    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:16:01.175075    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:16:01.193817    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:16:01.193825    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:16:03.713213    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:16:08.715380    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:16:08.715805    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:16:08.757853    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:16:08.757985    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:16:08.779225    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:16:08.779319    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:16:08.795265    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:16:08.795327    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:16:08.807956    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:16:08.808017    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:16:08.823198    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:16:08.823247    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:16:08.833912    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:16:08.833983    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:16:08.845004    3891 logs.go:276] 0 containers: []
	W0729 04:16:08.845018    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:16:08.845073    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:16:08.856602    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:16:08.856617    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:16:08.856623    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:16:08.902761    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:16:08.902773    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:16:08.918478    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:16:08.918490    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:16:08.930986    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:16:08.930996    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:16:08.942502    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:16:08.942510    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:16:08.967485    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:16:08.967495    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:16:09.005544    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:16:09.005552    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:16:09.023831    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:16:09.023856    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:16:09.034712    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:16:09.034727    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:16:09.051309    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:16:09.051320    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:16:09.056093    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:16:09.056099    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:16:09.067619    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:16:09.067631    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:16:09.079761    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:16:09.079771    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:16:09.097379    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:16:09.097389    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:16:09.110981    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:16:09.110990    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:16:09.128223    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:16:09.128232    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:16:11.642102    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:16:16.644844    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:16:16.645001    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:16:16.656285    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:16:16.656346    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:16:16.666497    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:16:16.666555    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:16:16.676750    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:16:16.676816    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:16:16.687639    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:16:16.687694    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:16:16.698165    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:16:16.698228    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:16:16.708778    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:16:16.708837    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:16:16.718949    3891 logs.go:276] 0 containers: []
	W0729 04:16:16.718959    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:16:16.719007    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:16:16.729687    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:16:16.729704    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:16:16.729710    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:16:16.733872    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:16:16.733878    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:16:16.747945    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:16:16.747957    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:16:16.759397    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:16:16.759414    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:16:16.776548    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:16:16.776561    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:16:16.787971    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:16:16.787984    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:16:16.800078    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:16:16.800087    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:16:16.837432    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:16:16.837440    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:16:16.873173    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:16:16.873183    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:16:16.886951    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:16:16.886963    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:16:16.905532    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:16:16.905541    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:16:16.922190    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:16:16.922203    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:16:16.935515    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:16:16.935527    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:16:16.946882    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:16:16.946893    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:16:16.958090    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:16:16.958100    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:16:16.977575    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:16:16.977584    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:16:19.504886    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:16:24.507331    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:16:24.507468    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:16:24.518877    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:16:24.518952    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:16:24.530514    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:16:24.530593    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:16:24.541585    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:16:24.541654    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:16:24.552381    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:16:24.552453    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:16:24.562991    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:16:24.563060    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:16:24.573527    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:16:24.573597    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:16:24.583421    3891 logs.go:276] 0 containers: []
	W0729 04:16:24.583435    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:16:24.583489    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:16:24.594285    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:16:24.594302    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:16:24.594309    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:16:24.605399    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:16:24.605410    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:16:24.631139    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:16:24.631147    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:16:24.635989    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:16:24.635995    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:16:24.650178    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:16:24.650187    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:16:24.664290    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:16:24.664299    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:16:24.679631    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:16:24.679642    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:16:24.691788    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:16:24.691799    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:16:24.703329    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:16:24.703341    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:16:24.715200    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:16:24.715210    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:16:24.733226    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:16:24.733236    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:16:24.744429    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:16:24.744443    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:16:24.780936    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:16:24.780943    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:16:24.815620    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:16:24.815630    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:16:24.835065    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:16:24.835073    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:16:24.846656    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:16:24.846668    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
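
	Each failed probe triggers the same sweep seen above: one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` query per control-plane component, then `docker logs --tail 400` on every ID that comes back (with journalctl used for kubelet and Docker, and a crictl-or-docker fallback for overall container status). A sketch of the enumerate-then-tail portion, assuming a local docker CLI; the component names, name filter, and 400-line tail match the log, while the helper itself is hypothetical:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// components mirrors the k8s_* name filters queried in the log above.
	var components = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}

	// containerIDs lists all container IDs (running or exited) whose name
	// matches k8s_<component>, the same query the log shows.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			for _, id := range ids {
				// Tail the last 400 lines, matching `docker logs --tail 400 <id>`.
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
			}
		}
	}

	The sweep never changes between iterations here (same eight filters, same container IDs), which is why the cycles in this log differ only in timestamps: the apiserver never comes up, so the gather loop keeps re-running until the test's deadline expires.
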
	I0729 04:16:27.360812    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:16:32.362926    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:16:32.363166    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:16:32.386375    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:16:32.386479    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:16:32.401850    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:16:32.401926    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:16:32.414198    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:16:32.414269    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:16:32.425115    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:16:32.425183    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:16:32.435924    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:16:32.435986    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:16:32.446605    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:16:32.446676    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:16:32.456864    3891 logs.go:276] 0 containers: []
	W0729 04:16:32.456875    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:16:32.456926    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:16:32.466899    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:16:32.466916    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:16:32.466921    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:16:32.478254    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:16:32.478264    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:16:32.491003    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:16:32.491017    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:16:32.511807    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:16:32.511820    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:16:32.536892    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:16:32.536905    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:16:32.574260    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:16:32.574271    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:16:32.579120    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:16:32.579126    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:16:32.592749    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:16:32.592762    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:16:32.604223    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:16:32.604238    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:16:32.639295    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:16:32.639308    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:16:32.661908    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:16:32.661918    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:16:32.676299    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:16:32.676309    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:16:32.687608    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:16:32.687619    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:16:32.699138    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:16:32.699150    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:16:32.713170    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:16:32.713179    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:16:32.724271    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:16:32.724283    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:16:35.237645    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:16:40.240403    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:16:40.240549    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:16:40.252880    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:16:40.252960    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:16:40.264432    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:16:40.264505    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:16:40.275754    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:16:40.275826    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:16:40.287050    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:16:40.287128    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:16:40.297961    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:16:40.298036    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:16:40.308690    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:16:40.308757    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:16:40.319517    3891 logs.go:276] 0 containers: []
	W0729 04:16:40.319533    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:16:40.319593    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:16:40.331013    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:16:40.331033    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:16:40.331039    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:16:40.342895    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:16:40.342909    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:16:40.363990    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:16:40.364004    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:16:40.375641    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:16:40.375657    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:16:40.392503    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:16:40.392518    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:16:40.410675    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:16:40.410687    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:16:40.422977    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:16:40.422989    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:16:40.448493    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:16:40.448505    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:16:40.460690    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:16:40.460704    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:16:40.465092    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:16:40.465099    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:16:40.487401    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:16:40.487412    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:16:40.498998    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:16:40.499013    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:16:40.537262    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:16:40.537280    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:16:40.554082    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:16:40.554093    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:16:40.607465    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:16:40.607479    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:16:40.628911    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:16:40.628926    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:16:43.145670    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:16:48.147888    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:16:48.148360    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:16:48.197952    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:16:48.198095    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:16:48.217980    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:16:48.218080    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:16:48.232400    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:16:48.232480    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:16:48.244845    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:16:48.244916    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:16:48.255488    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:16:48.255561    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:16:48.266441    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:16:48.266507    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:16:48.277458    3891 logs.go:276] 0 containers: []
	W0729 04:16:48.277470    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:16:48.277530    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:16:48.288125    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:16:48.288142    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:16:48.288148    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:16:48.312768    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:16:48.312778    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:16:48.323771    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:16:48.323784    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:16:48.358732    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:16:48.358744    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:16:48.372405    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:16:48.372418    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:16:48.384091    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:16:48.384104    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:16:48.405120    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:16:48.405130    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:16:48.409464    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:16:48.409473    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:16:48.432765    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:16:48.432774    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:16:48.446415    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:16:48.446427    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:16:48.458317    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:16:48.458327    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:16:48.477166    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:16:48.477175    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:16:48.495374    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:16:48.495387    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:16:48.509587    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:16:48.509599    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:16:48.547156    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:16:48.547166    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:16:48.559558    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:16:48.559569    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:16:51.073119    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:16:56.075215    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:16:56.075470    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:16:56.099408    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:16:56.099526    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:16:56.115801    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:16:56.115887    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:16:56.128466    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:16:56.128540    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:16:56.140285    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:16:56.140351    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:16:56.150531    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:16:56.150601    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:16:56.160827    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:16:56.160887    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:16:56.170741    3891 logs.go:276] 0 containers: []
	W0729 04:16:56.170755    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:16:56.170806    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:16:56.181695    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:16:56.181712    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:16:56.181718    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:16:56.204657    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:16:56.204667    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:16:56.216113    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:16:56.216125    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:16:56.241064    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:16:56.241078    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:16:56.293320    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:16:56.293335    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:16:56.310126    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:16:56.310136    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:16:56.322164    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:16:56.322177    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:16:56.358021    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:16:56.358028    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:16:56.380193    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:16:56.380203    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:16:56.395256    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:16:56.395267    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:16:56.399544    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:16:56.399553    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:16:56.413455    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:16:56.413469    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:16:56.424647    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:16:56.424659    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:16:56.436853    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:16:56.436865    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:16:56.450876    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:16:56.450889    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:16:56.461681    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:16:56.461693    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:16:58.974856    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:03.976971    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:03.977286    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:17:04.016439    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:17:04.016591    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:17:04.042861    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:17:04.042963    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:17:04.057101    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:17:04.057185    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:17:04.069383    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:17:04.069465    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:17:04.080304    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:17:04.080372    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:17:04.090989    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:17:04.091072    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:17:04.101841    3891 logs.go:276] 0 containers: []
	W0729 04:17:04.101855    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:17:04.101929    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:17:04.115254    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:17:04.115283    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:17:04.115289    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:17:04.153809    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:17:04.153820    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:17:04.169995    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:17:04.170007    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:17:04.192386    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:17:04.192395    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:17:04.203836    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:17:04.203850    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:17:04.226932    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:17:04.226940    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:17:04.238686    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:17:04.238695    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:17:04.253173    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:17:04.253181    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:17:04.264159    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:17:04.264169    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:17:04.275504    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:17:04.275517    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:17:04.287578    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:17:04.287590    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:17:04.302481    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:17:04.302491    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:17:04.314353    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:17:04.314365    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:17:04.319314    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:17:04.319321    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:17:04.355604    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:17:04.355616    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:17:04.375138    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:17:04.375149    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:17:06.891169    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:11.893380    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:11.893760    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:17:11.926394    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:17:11.926534    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:17:11.954600    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:17:11.954694    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:17:11.967675    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:17:11.967751    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:17:11.982406    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:17:11.982478    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:17:11.992988    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:17:11.993058    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:17:12.003440    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:17:12.003511    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:17:12.013802    3891 logs.go:276] 0 containers: []
	W0729 04:17:12.013812    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:17:12.013874    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:17:12.024698    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:17:12.024717    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:17:12.024723    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:17:12.038853    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:17:12.038865    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:17:12.050444    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:17:12.050458    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:17:12.062016    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:17:12.062030    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:17:12.083939    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:17:12.083950    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:17:12.120589    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:17:12.120598    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:17:12.142321    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:17:12.142334    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:17:12.161321    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:17:12.161333    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:17:12.173103    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:17:12.173114    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:17:12.196197    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:17:12.196207    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:17:12.207619    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:17:12.207634    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:17:12.211888    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:17:12.211895    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:17:12.222984    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:17:12.222995    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:17:12.235124    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:17:12.235135    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:17:12.271352    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:17:12.271363    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:17:12.285571    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:17:12.285584    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:17:14.799706    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:19.802311    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:19.802511    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:17:19.825842    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:17:19.825966    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:17:19.843191    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:17:19.843273    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:17:19.855235    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:17:19.855309    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:17:19.870329    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:17:19.870397    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:17:19.881120    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:17:19.881192    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:17:19.891607    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:17:19.891679    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:17:19.901494    3891 logs.go:276] 0 containers: []
	W0729 04:17:19.901509    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:17:19.901569    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:17:19.912490    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:17:19.912506    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:17:19.912512    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:17:19.923618    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:17:19.923628    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:17:19.946236    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:17:19.946243    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:17:19.961018    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:17:19.961030    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:17:19.972135    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:17:19.972147    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:17:19.983423    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:17:19.983433    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:17:20.000478    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:17:20.000487    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:17:20.011473    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:17:20.011485    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:17:20.048847    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:17:20.048858    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:17:20.053238    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:17:20.053247    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:17:20.087674    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:17:20.087686    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:17:20.101989    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:17:20.102004    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:17:20.113937    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:17:20.113950    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:17:20.134100    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:17:20.134111    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:17:20.148682    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:17:20.148693    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:17:20.160634    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:17:20.160644    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
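The lines above are one full iteration of minikube's recovery loop: probe the apiserver's /healthz endpoint, and when the probe times out, enumerate the control-plane containers (docker ps -a --filter=name=k8s_...) and tail each one's logs before probing again. Below is a minimal sketch of the probe half, assuming the roughly 5-second client timeout implied by the gap between each "Checking" and "stopped" timestamp; it is an illustration of the pattern, not minikube's actual api_server.go code.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz performs one probe against the apiserver healthz endpoint.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumption: inferred from the log timestamps above
		// the apiserver inside the VM serves a self-signed certificate
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("stopped: %s: %w", url, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	for i := 0; i < 3; i++ { // the real loop retries until an overall deadline; capped here
		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println(err) // on failure minikube gathers component logs, then retries
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
}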
	I0729 04:17:22.673601    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:27.675632    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:27.675721    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:17:27.687570    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:17:27.687644    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:17:27.699644    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:17:27.699716    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:17:27.711357    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:17:27.711430    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:17:27.723299    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:17:27.723373    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:17:27.738845    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:17:27.738916    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:17:27.752525    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:17:27.752590    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:17:27.762919    3891 logs.go:276] 0 containers: []
	W0729 04:17:27.762931    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:17:27.762998    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:17:27.773225    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:17:27.773242    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:17:27.773247    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:17:27.788005    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:17:27.788016    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:17:27.800114    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:17:27.800126    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:17:27.818042    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:17:27.818054    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:17:27.858078    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:17:27.858087    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:17:27.873125    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:17:27.873136    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:17:27.885069    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:17:27.885080    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:17:27.896430    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:17:27.896447    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:17:27.908628    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:17:27.908642    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:17:27.932754    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:17:27.932764    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:17:27.937461    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:17:27.937470    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:17:27.959197    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:17:27.959213    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:17:27.973446    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:17:27.973457    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:17:27.985648    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:17:27.985661    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:17:28.022345    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:17:28.022355    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:17:28.034493    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:17:28.034505    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:17:30.548854    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:35.551081    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:35.551355    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:17:35.577529    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:17:35.577654    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:17:35.594002    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:17:35.594092    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:17:35.606918    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:17:35.606994    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:17:35.619040    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:17:35.619117    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:17:35.629667    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:17:35.629740    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:17:35.640341    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:17:35.640410    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:17:35.650196    3891 logs.go:276] 0 containers: []
	W0729 04:17:35.650211    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:17:35.650265    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:17:35.660295    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:17:35.660314    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:17:35.660320    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:17:35.679342    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:17:35.679353    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:17:35.691316    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:17:35.691330    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:17:35.703471    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:17:35.703486    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:17:35.715183    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:17:35.715197    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:17:35.719363    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:17:35.719369    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:17:35.752703    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:17:35.752713    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:17:35.767182    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:17:35.767195    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:17:35.781895    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:17:35.781912    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:17:35.793306    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:17:35.793319    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:17:35.810868    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:17:35.810878    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:17:35.825161    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:17:35.825172    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:17:35.836194    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:17:35.836205    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:17:35.847174    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:17:35.847183    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:17:35.885077    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:17:35.885087    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:17:35.909406    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:17:35.909414    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:17:38.422179    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:43.424693    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:43.424845    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:17:43.437270    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:17:43.437337    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:17:43.448325    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:17:43.448400    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:17:43.460108    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:17:43.460175    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:17:43.471260    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:17:43.471335    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:17:43.481541    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:17:43.481608    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:17:43.491929    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:17:43.491993    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:17:43.502050    3891 logs.go:276] 0 containers: []
	W0729 04:17:43.502062    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:17:43.502115    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:17:43.512371    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:17:43.512388    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:17:43.512394    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:17:43.549481    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:17:43.549499    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:17:43.569131    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:17:43.569144    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:17:43.581922    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:17:43.581933    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:17:43.595793    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:17:43.595803    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:17:43.610596    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:17:43.610607    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:17:43.622824    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:17:43.622838    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:17:43.635844    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:17:43.635857    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:17:43.659984    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:17:43.659995    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:17:43.664309    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:17:43.664317    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:17:43.675801    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:17:43.675813    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:17:43.687600    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:17:43.687611    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:17:43.705572    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:17:43.705582    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:17:43.718809    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:17:43.718820    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:17:43.755234    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:17:43.755245    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:17:43.769396    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:17:43.769407    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:17:46.296343    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:51.298428    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:51.298610    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:17:51.312645    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:17:51.312729    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:17:51.324408    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:17:51.324473    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:17:51.335449    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:17:51.335514    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:17:51.346144    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:17:51.346217    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:17:51.363407    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:17:51.363479    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:17:51.374059    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:17:51.374122    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:17:51.391416    3891 logs.go:276] 0 containers: []
	W0729 04:17:51.391428    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:17:51.391492    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:17:51.401645    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:17:51.401661    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:17:51.401666    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:17:51.413543    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:17:51.413553    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:17:51.449490    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:17:51.449501    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:17:51.454259    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:17:51.454269    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:17:51.468173    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:17:51.468185    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:17:51.481181    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:17:51.481193    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:17:51.499882    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:17:51.499894    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:17:51.515403    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:17:51.515413    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:17:51.527400    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:17:51.527415    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:17:51.549392    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:17:51.549399    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:17:51.584630    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:17:51.584643    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:17:51.598994    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:17:51.599005    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:17:51.612208    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:17:51.612218    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:17:51.623603    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:17:51.623614    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:17:51.635528    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:17:51.635539    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:17:51.653216    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:17:51.653228    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:17:54.167273    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:59.169886    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:59.169964    3891 kubeadm.go:597] duration metric: took 4m3.878274542s to restartPrimaryControlPlane
	W0729 04:17:59.170032    3891 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 04:17:59.170061    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 04:18:00.116435    3891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 04:18:00.121107    3891 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 04:18:00.124086    3891 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 04:18:00.126656    3891 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 04:18:00.126662    3891 kubeadm.go:157] found existing configuration files:
	
	I0729 04:18:00.126683    3891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/admin.conf
	I0729 04:18:00.129909    3891 kubeadm.go:163] "https://control-plane.minikube.internal:50299" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 04:18:00.129933    3891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 04:18:00.133187    3891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/kubelet.conf
	I0729 04:18:00.136249    3891 kubeadm.go:163] "https://control-plane.minikube.internal:50299" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 04:18:00.136275    3891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 04:18:00.138882    3891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/controller-manager.conf
	I0729 04:18:00.141439    3891 kubeadm.go:163] "https://control-plane.minikube.internal:50299" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 04:18:00.141464    3891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 04:18:00.144495    3891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/scheduler.conf
	I0729 04:18:00.146913    3891 kubeadm.go:163] "https://control-plane.minikube.internal:50299" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 04:18:00.146934    3891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
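The grep-then-rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the endpoint (or the file itself) is missing, so the subsequent kubeadm init starts from a clean slate. A hedged sketch of the pattern follows; the file list and endpoint are copied from the log, but the structure is illustrative rather than the actual kubeadm.go code.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50299"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the pattern (or the file) is missing,
		// which is exactly the "Process exited with status 2" case in the log
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}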
	I0729 04:18:00.149566    3891 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 04:18:00.167669    3891 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 04:18:00.167710    3891 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 04:18:00.216199    3891 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 04:18:00.216286    3891 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 04:18:00.216346    3891 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 04:18:00.267166    3891 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 04:18:00.271376    3891 out.go:204]   - Generating certificates and keys ...
	I0729 04:18:00.271416    3891 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 04:18:00.271449    3891 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 04:18:00.271492    3891 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 04:18:00.271528    3891 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 04:18:00.271568    3891 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 04:18:00.271600    3891 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 04:18:00.271645    3891 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 04:18:00.271689    3891 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 04:18:00.271732    3891 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 04:18:00.271777    3891 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 04:18:00.271815    3891 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 04:18:00.271847    3891 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 04:18:00.381507    3891 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 04:18:00.465686    3891 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 04:18:00.579209    3891 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 04:18:00.624648    3891 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 04:18:00.658190    3891 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 04:18:00.658893    3891 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 04:18:00.658919    3891 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 04:18:00.722960    3891 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 04:18:00.731104    3891 out.go:204]   - Booting up control plane ...
	I0729 04:18:00.731160    3891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 04:18:00.731199    3891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 04:18:00.731267    3891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 04:18:00.731313    3891 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 04:18:00.731395    3891 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 04:18:05.227794    3891 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502513 seconds
	I0729 04:18:05.227848    3891 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 04:18:05.232529    3891 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 04:18:05.760246    3891 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 04:18:05.760708    3891 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-033000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 04:18:06.265805    3891 kubeadm.go:310] [bootstrap-token] Using token: qire2r.ix7rwxajfxrew1y5
	I0729 04:18:06.271670    3891 out.go:204]   - Configuring RBAC rules ...
	I0729 04:18:06.271726    3891 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 04:18:06.271772    3891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 04:18:06.273734    3891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 04:18:06.275578    3891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 04:18:06.276631    3891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 04:18:06.277613    3891 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 04:18:06.280649    3891 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 04:18:06.456679    3891 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 04:18:06.670479    3891 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 04:18:06.670905    3891 kubeadm.go:310] 
	I0729 04:18:06.670938    3891 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 04:18:06.670944    3891 kubeadm.go:310] 
	I0729 04:18:06.670989    3891 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 04:18:06.670991    3891 kubeadm.go:310] 
	I0729 04:18:06.671003    3891 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 04:18:06.671030    3891 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 04:18:06.671076    3891 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 04:18:06.671083    3891 kubeadm.go:310] 
	I0729 04:18:06.671109    3891 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 04:18:06.671112    3891 kubeadm.go:310] 
	I0729 04:18:06.671141    3891 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 04:18:06.671145    3891 kubeadm.go:310] 
	I0729 04:18:06.671195    3891 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 04:18:06.671235    3891 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 04:18:06.671273    3891 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 04:18:06.671276    3891 kubeadm.go:310] 
	I0729 04:18:06.671354    3891 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 04:18:06.671408    3891 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 04:18:06.671413    3891 kubeadm.go:310] 
	I0729 04:18:06.671532    3891 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qire2r.ix7rwxajfxrew1y5 \
	I0729 04:18:06.671591    3891 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e5aa2d5aa27d88407c50ef5c55a2dae7e3993515072a6e61b6476ae55fad38d6 \
	I0729 04:18:06.671605    3891 kubeadm.go:310] 	--control-plane 
	I0729 04:18:06.671613    3891 kubeadm.go:310] 
	I0729 04:18:06.671656    3891 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 04:18:06.671661    3891 kubeadm.go:310] 
	I0729 04:18:06.671701    3891 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qire2r.ix7rwxajfxrew1y5 \
	I0729 04:18:06.671755    3891 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e5aa2d5aa27d88407c50ef5c55a2dae7e3993515072a6e61b6476ae55fad38d6 
	I0729 04:18:06.671816    3891 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 04:18:06.671822    3891 cni.go:84] Creating CNI manager for ""
	I0729 04:18:06.671830    3891 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:18:06.675924    3891 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 04:18:06.684012    3891 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 04:18:06.687180    3891 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
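For the bridge CNI setup above, minikube writes a conflist into /etc/cni/net.d over SSH. The actual 496-byte payload is not shown in the log, so the JSON embedded in this sketch is only a representative guess at a minimal bridge conflist; the subnet and plugin options are assumptions.

package main

import "os"

// conflist is an assumed, illustrative bridge CNI configuration;
// the real /etc/cni/net.d/1-k8s.conflist contents are not in the log.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "k8s",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    }
  ]
}`

func main() {
	// minikube copies the file over SSH; writing locally is enough for the sketch
	_ = os.WriteFile("/tmp/1-k8s.conflist", []byte(conflist), 0o644)
}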
	I0729 04:18:06.691776    3891 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 04:18:06.691820    3891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 04:18:06.691821    3891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-033000 minikube.k8s.io/updated_at=2024_07_29T04_18_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53 minikube.k8s.io/name=running-upgrade-033000 minikube.k8s.io/primary=true
	I0729 04:18:06.739200    3891 ops.go:34] apiserver oom_adj: -16
	I0729 04:18:06.739330    3891 kubeadm.go:1113] duration metric: took 47.548167ms to wait for elevateKubeSystemPrivileges
	I0729 04:18:06.739344    3891 kubeadm.go:394] duration metric: took 4m11.461743625s to StartCluster
	I0729 04:18:06.739354    3891 settings.go:142] acquiring lock: {Name:mkb57b03ccb64deee52152ed8ac01af4d9e1ee07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:18:06.739446    3891 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:18:06.739811    3891 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/kubeconfig: {Name:mkc1463454d977493e341af62af023d087f8e1b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:18:06.740015    3891 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:18:06.740046    3891 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 04:18:06.740095    3891 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-033000"
	I0729 04:18:06.740108    3891 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-033000"
	I0729 04:18:06.740109    3891 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-033000"
	I0729 04:18:06.740120    3891 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-033000"
	W0729 04:18:06.740111    3891 addons.go:243] addon storage-provisioner should already be in state true
	I0729 04:18:06.740155    3891 host.go:66] Checking if "running-upgrade-033000" exists ...
	I0729 04:18:06.740400    3891 config.go:182] Loaded profile config "running-upgrade-033000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:18:06.741124    3891 kapi.go:59] client config for running-upgrade-033000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19336-945/.minikube/profiles/running-upgrade-033000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19336-945/.minikube/profiles/running-upgrade-033000/client.key", CAFile:"/Users/jenkins/minikube-integration/19336-945/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1017c0080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 04:18:06.741234    3891 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-033000"
	W0729 04:18:06.741239    3891 addons.go:243] addon default-storageclass should already be in state true
	I0729 04:18:06.741253    3891 host.go:66] Checking if "running-upgrade-033000" exists ...
	I0729 04:18:06.743871    3891 out.go:177] * Verifying Kubernetes components...
	I0729 04:18:06.744174    3891 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 04:18:06.748090    3891 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 04:18:06.748100    3891 sshutil.go:53] new ssh client: &{IP:localhost Port:50267 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/running-upgrade-033000/id_rsa Username:docker}
	I0729 04:18:06.751874    3891 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:18:06.755941    3891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:18:06.759978    3891 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 04:18:06.759983    3891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 04:18:06.759989    3891 sshutil.go:53] new ssh client: &{IP:localhost Port:50267 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/running-upgrade-033000/id_rsa Username:docker}
	I0729 04:18:06.835426    3891 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 04:18:06.840669    3891 api_server.go:52] waiting for apiserver process to appear ...
	I0729 04:18:06.840718    3891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:18:06.844596    3891 api_server.go:72] duration metric: took 104.572833ms to wait for apiserver process to appear ...
	I0729 04:18:06.844604    3891 api_server.go:88] waiting for apiserver healthz status ...
	I0729 04:18:06.844611    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:06.850040    3891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 04:18:06.873291    3891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 04:18:11.844640    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:11.844666    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:16.846361    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:16.846393    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:21.846519    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:21.846546    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:26.846767    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:26.846840    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:31.847570    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:31.847631    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:36.848141    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:36.848166    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 04:18:37.177478    3891 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 04:18:37.182568    3891 out.go:177] * Enabled addons: storage-provisioner
	I0729 04:18:37.191829    3891 addons.go:510] duration metric: took 30.452773584s for enable addons: enabled=[storage-provisioner]
	I0729 04:18:41.849129    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:41.849174    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:46.850288    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:46.850333    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:51.851115    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:51.851132    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:56.852164    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:56.852192    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:01.854054    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:01.854075    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:06.856119    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:06.856244    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:06.877349    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:19:06.877424    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:06.887735    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:19:06.887812    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:06.898543    3891 logs.go:276] 2 containers: [f6b883d29008 ba79364733a5]
	I0729 04:19:06.898613    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:06.909211    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:19:06.909282    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:06.920045    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:19:06.920105    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:06.930733    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:19:06.930800    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:06.945131    3891 logs.go:276] 0 containers: []
	W0729 04:19:06.945150    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:06.945208    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:06.955447    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:19:06.955462    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:19:06.955467    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:19:06.970179    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:19:06.970189    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:19:06.992316    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:06.992327    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:07.017457    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:19:07.017465    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:19:07.032136    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:19:07.032146    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:19:07.045868    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:19:07.045878    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:19:07.058288    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:19:07.058299    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:19:07.069845    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:19:07.069855    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:19:07.081730    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:07.081741    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:07.116868    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:07.116876    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:07.121260    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:07.121267    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:07.159419    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:19:07.159430    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:19:07.171203    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:19:07.171213    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:09.685218    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:14.687396    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:14.687580    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:14.699358    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:19:14.699435    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:14.709367    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:19:14.709442    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:14.719768    3891 logs.go:276] 2 containers: [f6b883d29008 ba79364733a5]
	I0729 04:19:14.719837    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:14.731234    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:19:14.731305    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:14.747610    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:19:14.747682    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:14.757996    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:19:14.758062    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:14.768370    3891 logs.go:276] 0 containers: []
	W0729 04:19:14.768388    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:14.768450    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:14.779228    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:19:14.779244    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:19:14.779250    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:19:14.790367    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:19:14.790378    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:19:14.804768    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:19:14.804779    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:19:14.823032    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:19:14.823042    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:19:14.834132    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:19:14.834142    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:19:14.848383    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:19:14.848393    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:19:14.859648    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:14.859658    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:14.897004    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:19:14.897015    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:19:14.911768    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:19:14.911780    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:19:14.924230    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:14.924240    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:14.950188    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:19:14.950196    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:14.966470    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:14.966480    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:15.001698    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:15.001706    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:17.508352    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:22.510500    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
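The two lines above repeat throughout this stretch of the run: minikube probes https://10.0.2.15:8443/healthz, and each probe gives up almost exactly five seconds later with a client-side timeout ("Client.Timeout exceeded while awaiting headers"). As a hedged sketch of that kind of bounded probe — the URL and the five-second budget are taken from the log; the retry cadence, the function names, and the TLS handling below are illustrative assumptions, not minikube's actual implementation:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz issues one GET against the apiserver's /healthz endpoint,
// giving up after the supplied timeout. A timeout surfaces as an error,
// matching the "context deadline exceeded" failures in the log above.
func probeHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: timeout,
		Transport: &http.Transport{
			// Assumption: the apiserver serves a self-signed cert here,
			// so a bare probe skips verification (illustrative only).
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	// 10.0.2.15:8443 is the guest address from the log; 5s matches the
	// observed gap between each "Checking" line and its "stopped" line.
	for i := 0; i < 3; i++ {
		if err := probeHealthz("https://10.0.2.15:8443/healthz", 5*time.Second); err != nil {
			fmt.Println("healthz probe failed:", err)
			time.Sleep(2500 * time.Millisecond) // back off before retrying
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
}
```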
	I0729 04:19:22.510677    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:22.523280    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:19:22.523361    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:22.534355    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:19:22.534429    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:22.545001    3891 logs.go:276] 2 containers: [f6b883d29008 ba79364733a5]
	I0729 04:19:22.545076    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:22.556922    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:19:22.556991    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:22.567224    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:19:22.567297    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:22.577565    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:19:22.577631    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:22.588024    3891 logs.go:276] 0 containers: []
	W0729 04:19:22.588037    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:22.588098    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:22.598689    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:19:22.598706    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:22.598712    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:22.603159    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:22.603168    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:22.641121    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:19:22.641134    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:19:22.656642    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:19:22.656653    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:19:22.671760    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:19:22.671771    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:19:22.691141    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:19:22.691153    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:19:22.706446    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:19:22.706460    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:19:22.718434    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:19:22.718445    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:19:22.735837    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:19:22.735846    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:19:22.747090    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:22.747100    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:22.770759    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:22.770768    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:22.804797    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:19:22.804809    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:19:22.819821    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:19:22.819831    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:25.333433    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:30.336049    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
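After each failed probe, the cycle re-enumerates the control-plane containers with one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` call per component before deciding which logs to gather. A minimal local sketch of that enumeration, assuming a reachable docker CLI on the current host (the real run executes these commands inside the guest through an SSH runner); listContainers is an illustrative helper:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers (running or exited)
// whose name matches the given k8s_ prefix, mirroring the
// "docker ps -a --filter=name=... --format={{.ID}}" calls in the log.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// Prints in the same shape as the log's "N containers: [...]" lines.
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	}
}
```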
	I0729 04:19:30.336346    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:30.365256    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:19:30.365392    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:30.395537    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:19:30.395621    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:30.408530    3891 logs.go:276] 2 containers: [f6b883d29008 ba79364733a5]
	I0729 04:19:30.408598    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:30.420099    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:19:30.420172    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:30.431534    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:19:30.431605    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:30.443052    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:19:30.443120    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:30.454181    3891 logs.go:276] 0 containers: []
	W0729 04:19:30.454197    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:30.454261    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:30.465218    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:19:30.465235    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:19:30.465240    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:19:30.479902    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:30.479912    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:30.503806    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:30.503816    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:30.539187    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:19:30.539199    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:19:30.556281    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:19:30.556292    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:19:30.567863    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:19:30.567874    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:19:30.579405    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:19:30.579415    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:19:30.597501    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:19:30.597510    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:19:30.609168    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:19:30.609178    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:30.620822    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:30.620832    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:30.656750    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:30.656768    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:30.661462    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:19:30.661469    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:19:30.675980    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:19:30.675990    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:19:33.189831    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:38.191995    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:38.192180    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:38.207473    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:19:38.207552    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:38.220560    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:19:38.220634    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:38.232632    3891 logs.go:276] 2 containers: [f6b883d29008 ba79364733a5]
	I0729 04:19:38.232704    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:38.243058    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:19:38.243131    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:38.257623    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:19:38.257706    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:38.268371    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:19:38.268427    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:38.278503    3891 logs.go:276] 0 containers: []
	W0729 04:19:38.278515    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:38.278571    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:38.288757    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:19:38.288772    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:19:38.288778    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:19:38.300192    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:19:38.300206    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:19:38.314606    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:19:38.314614    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:19:38.332495    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:19:38.332506    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:19:38.344510    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:19:38.344520    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:19:38.356130    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:19:38.356140    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:19:38.368211    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:19:38.368224    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:19:38.385594    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:38.385605    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:38.420694    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:38.420703    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:38.425228    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:38.425235    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:38.460631    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:19:38.460645    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:19:38.475985    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:38.475995    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:38.501154    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:19:38.501165    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:41.014656    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:46.015683    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
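For every container found, the gatherer then tails the last 400 lines with `docker logs --tail 400 <id>`, as the interleaved "Gathering logs for ..." lines show. A sketch of that step, again assuming a local docker CLI; tailContainerLogs is an illustrative helper, not a minikube function:

```go
package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs captures the last n lines a container wrote — the same
// "docker logs --tail 400 <id>" invocation the log shows per component.
// CombinedOutput is used because docker logs replays the container's
// stderr stream on stderr.
func tailContainerLogs(id string, n int) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	// e4fbff702599 is the kube-apiserver container ID from this run;
	// substitute any local container ID when trying this elsewhere.
	logs, err := tailContainerLogs("e4fbff702599", 400)
	if err != nil {
		fmt.Println("docker logs failed:", err)
		return
	}
	fmt.Print(logs)
}
```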
	I0729 04:19:46.015799    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:46.030102    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:19:46.030172    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:46.042335    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:19:46.042406    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:46.052976    3891 logs.go:276] 2 containers: [f6b883d29008 ba79364733a5]
	I0729 04:19:46.053043    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:46.064069    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:19:46.064132    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:46.074577    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:19:46.074648    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:46.084920    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:19:46.084991    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:46.095103    3891 logs.go:276] 0 containers: []
	W0729 04:19:46.095117    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:46.095177    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:46.106249    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:19:46.106267    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:19:46.106272    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:19:46.117675    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:19:46.117685    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:19:46.129382    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:19:46.129392    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:19:46.143966    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:19:46.143976    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:19:46.158714    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:46.158724    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:46.183975    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:19:46.183983    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:46.196284    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:46.196295    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:46.229775    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:46.229783    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:46.233735    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:19:46.233744    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:19:46.247468    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:19:46.247478    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:19:46.264902    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:19:46.264914    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:19:46.276202    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:46.276211    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:46.310588    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:19:46.310602    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:19:48.827888    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:53.830660    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:53.831344    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:53.867600    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:19:53.867740    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:53.888720    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:19:53.888820    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:53.907877    3891 logs.go:276] 2 containers: [f6b883d29008 ba79364733a5]
	I0729 04:19:53.907947    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:53.920077    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:19:53.920151    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:53.937330    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:19:53.937396    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:53.947961    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:19:53.948029    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:53.958165    3891 logs.go:276] 0 containers: []
	W0729 04:19:53.958174    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:53.958227    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:53.968100    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:19:53.968116    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:53.968121    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:54.004515    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:19:54.004526    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:19:54.018980    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:19:54.018991    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:19:54.031864    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:19:54.031876    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:19:54.047733    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:19:54.047746    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:19:54.060624    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:19:54.060634    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:19:54.078662    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:19:54.078673    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:19:54.089859    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:54.089869    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:54.112672    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:19:54.112679    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:54.124633    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:54.124644    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:54.159861    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:54.159868    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:54.164859    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:19:54.164868    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:19:54.178833    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:19:54.178843    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:19:56.692049    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:01.694179    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:01.694352    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:01.709779    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:20:01.709848    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:01.720347    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:20:01.720418    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:01.735131    3891 logs.go:276] 2 containers: [f6b883d29008 ba79364733a5]
	I0729 04:20:01.735203    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:01.746202    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:20:01.746266    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:01.756894    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:20:01.756959    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:01.768568    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:20:01.768638    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:01.778638    3891 logs.go:276] 0 containers: []
	W0729 04:20:01.778652    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:01.778710    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:01.789361    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:20:01.789375    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:20:01.789379    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:20:01.801026    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:01.801035    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:01.824163    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:20:01.824171    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:20:01.835454    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:20:01.835467    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:20:01.852017    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:01.852027    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:01.887998    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:20:01.888012    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:20:01.908570    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:20:01.908580    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:20:01.922532    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:20:01.922542    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:20:01.934505    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:20:01.934516    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:20:01.949625    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:20:01.949635    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:20:01.961360    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:01.961373    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:01.996455    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:01.996465    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:02.000869    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:20:02.000879    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:04.514193    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:09.516403    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
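Alongside the per-container logs, each cycle also collects host-side logs: kubelet and Docker units via journalctl, plus a filtered dmesg. The shell commands in the sketch below are copied verbatim from the log; running them assumes a systemd host with passwordless sudo, and hostLogs itself is an illustrative helper:

```go
package main

import (
	"fmt"
	"os/exec"
)

// hostLogs runs the same journalctl/dmesg pipelines the log shows for the
// "kubelet", "Docker", and "dmesg" gathering steps.
func hostLogs() map[string]string {
	cmds := map[string]string{
		"kubelet": "sudo journalctl -u kubelet -n 400",
		"Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
		"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}
	out := make(map[string]string, len(cmds))
	for name, cmd := range cmds {
		b, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			out[name] = "error: " + err.Error()
			continue
		}
		out[name] = string(b)
	}
	return out
}

func main() {
	for name, logs := range hostLogs() {
		fmt.Printf("== %s ==\n%.200s\n", name, logs) // print a short preview
	}
}
```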
	I0729 04:20:09.516555    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:09.529326    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:20:09.529402    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:09.540203    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:20:09.540277    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:09.550604    3891 logs.go:276] 2 containers: [f6b883d29008 ba79364733a5]
	I0729 04:20:09.550674    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:09.561705    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:20:09.561775    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:09.572437    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:20:09.572506    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:09.582958    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:20:09.583025    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:09.593631    3891 logs.go:276] 0 containers: []
	W0729 04:20:09.593643    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:09.593699    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:09.604085    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:20:09.604100    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:20:09.604105    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:20:09.618335    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:20:09.618346    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:20:09.634873    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:20:09.634884    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:20:09.646832    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:20:09.646844    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:20:09.659444    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:09.659455    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:09.682848    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:20:09.682859    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:09.694189    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:20:09.694199    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:20:09.712707    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:09.712717    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:09.746411    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:09.746419    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:09.750850    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:09.750860    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:09.821181    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:20:09.821196    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:20:09.835232    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:20:09.835242    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:20:09.847943    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:20:09.847956    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:20:12.364722    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:17.365683    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:17.365886    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:17.382795    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:20:17.382882    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:17.396009    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:20:17.396077    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:17.407275    3891 logs.go:276] 2 containers: [f6b883d29008 ba79364733a5]
	I0729 04:20:17.407341    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:17.417601    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:20:17.417666    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:17.428090    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:20:17.428163    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:17.438992    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:20:17.439055    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:17.449126    3891 logs.go:276] 0 containers: []
	W0729 04:20:17.449137    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:17.449197    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:17.459589    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:20:17.459603    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:20:17.459610    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:20:17.474343    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:20:17.474353    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:20:17.488395    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:20:17.488407    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:20:17.503042    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:20:17.503051    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:20:17.521020    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:17.521032    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:17.556049    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:17.556056    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:17.560403    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:17.560412    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:17.597079    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:20:17.597092    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:20:17.609093    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:20:17.609104    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:20:17.620390    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:20:17.620405    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:20:17.632577    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:20:17.632588    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:20:17.652887    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:17.652901    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:17.679629    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:20:17.679647    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:20.195461    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:25.197767    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:25.198141    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:25.232438    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:20:25.232569    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:25.251844    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:20:25.251940    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:25.266567    3891 logs.go:276] 4 containers: [205cacb029f0 ffa497a17609 f6b883d29008 ba79364733a5]
	I0729 04:20:25.266642    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:25.278770    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:20:25.278841    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:25.289628    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:20:25.289701    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:25.305534    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:20:25.305599    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:25.316777    3891 logs.go:276] 0 containers: []
	W0729 04:20:25.316790    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:25.316848    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:25.330060    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:20:25.330076    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:25.330083    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:25.369024    3891 logs.go:123] Gathering logs for coredns [205cacb029f0] ...
	I0729 04:20:25.369036    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205cacb029f0"
	I0729 04:20:25.380406    3891 logs.go:123] Gathering logs for coredns [ffa497a17609] ...
	I0729 04:20:25.380418    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffa497a17609"
	I0729 04:20:25.391683    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:25.391694    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:25.416596    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:25.416606    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:25.450972    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:20:25.450979    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:20:25.463484    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:20:25.463494    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:20:25.475335    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:20:25.475343    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:20:25.493015    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:20:25.493031    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:20:25.505099    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:25.505109    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:25.509285    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:20:25.509293    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:20:25.523314    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:20:25.523326    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:20:25.537358    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:20:25.537367    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:20:25.555149    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:20:25.555160    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:20:25.567025    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:20:25.567036    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:28.080517    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:33.082668    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
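Note the change in the cycle just above: from 04:20:25 onward the coredns filter returns four containers instead of two — 205cacb029f0 and ffa497a17609 appear alongside f6b883d29008 and ba79364733a5 — so new coredns containers were created during the retry window even though /healthz never answered. Diffing successive enumerations makes such changes easy to spot; a small sketch using the IDs from this run (newIDs is an illustrative helper):

```go
package main

import "fmt"

// newIDs reports container IDs present in the current enumeration but not
// in the previous one. Applied to the coredns lists above, it flags the
// two containers that first appear at 04:20:25.
func newIDs(prev, cur []string) []string {
	seen := make(map[string]bool, len(prev))
	for _, id := range prev {
		seen[id] = true
	}
	var added []string
	for _, id := range cur {
		if !seen[id] {
			added = append(added, id)
		}
	}
	return added
}

func main() {
	prev := []string{"f6b883d29008", "ba79364733a5"}
	cur := []string{"205cacb029f0", "ffa497a17609", "f6b883d29008", "ba79364733a5"}
	fmt.Println(newIDs(prev, cur)) // [205cacb029f0 ffa497a17609]
}
```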
	I0729 04:20:33.082918    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:33.108201    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:20:33.108314    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:33.126087    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:20:33.126167    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:33.139370    3891 logs.go:276] 4 containers: [205cacb029f0 ffa497a17609 f6b883d29008 ba79364733a5]
	I0729 04:20:33.139460    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:33.150597    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:20:33.150673    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:33.161244    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:20:33.161329    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:33.171873    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:20:33.171945    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:33.182810    3891 logs.go:276] 0 containers: []
	W0729 04:20:33.182820    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:33.182879    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:33.193389    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:20:33.193407    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:20:33.193414    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:20:33.207098    3891 logs.go:123] Gathering logs for coredns [205cacb029f0] ...
	I0729 04:20:33.207108    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205cacb029f0"
	I0729 04:20:33.218768    3891 logs.go:123] Gathering logs for coredns [ffa497a17609] ...
	I0729 04:20:33.218777    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffa497a17609"
	I0729 04:20:33.230316    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:20:33.230327    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:20:33.247708    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:33.247716    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:33.272888    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:33.272896    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:33.307349    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:20:33.307356    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:20:33.321971    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:20:33.321984    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:20:33.334285    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:20:33.334299    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:20:33.349057    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:20:33.349068    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:20:33.364199    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:33.364212    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:33.368773    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:20:33.368784    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:20:33.381432    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:33.381444    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:33.420835    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:20:33.420848    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:20:33.438508    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:20:33.438522    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:35.951958    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:40.953103    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:40.953298    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:40.972544    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:20:40.972628    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:40.985980    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:20:40.986047    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:40.997493    3891 logs.go:276] 4 containers: [205cacb029f0 ffa497a17609 f6b883d29008 ba79364733a5]
	I0729 04:20:40.997573    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:41.008848    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:20:41.008921    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:41.019134    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:20:41.019203    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:41.029947    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:20:41.030013    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:41.040203    3891 logs.go:276] 0 containers: []
	W0729 04:20:41.040215    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:41.040273    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:41.050998    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:20:41.051013    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:20:41.051019    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:20:41.065315    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:20:41.065325    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:20:41.076730    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:41.076740    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:41.099995    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:20:41.100004    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:41.111425    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:20:41.111436    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:20:41.125142    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:20:41.125151    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:20:41.136926    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:41.136934    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:41.172141    3891 logs.go:123] Gathering logs for coredns [205cacb029f0] ...
	I0729 04:20:41.172150    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205cacb029f0"
	I0729 04:20:41.183879    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:20:41.183892    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:20:41.195354    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:20:41.195366    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:20:41.209837    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:20:41.209849    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:20:41.227607    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:41.227618    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:41.232176    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:41.232186    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:41.268036    3891 logs.go:123] Gathering logs for coredns [ffa497a17609] ...
	I0729 04:20:41.268046    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffa497a17609"
	I0729 04:20:41.279452    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:20:41.279464    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:20:43.790929    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:48.793206    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
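The recurring "container status" step is a single shell line with a built-in fallback: it prefers crictl when `which crictl` finds one, and otherwise falls back to `sudo docker ps -a`. The same fallback expressed in Go, as a hedged sketch; containerStatus is an illustrative helper, and the sudo/PATH assumptions match the logged command rather than any documented minikube API:

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the fallback in the "container status" step
// above: use crictl when it is installed, otherwise fall back to docker.
func containerStatus() (string, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
			return string(out), nil
		}
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	status, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
		return
	}
	fmt.Print(status)
}
```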
	I0729 04:20:48.793495    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:48.823345    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:20:48.823470    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:48.842272    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:20:48.842361    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:48.855953    3891 logs.go:276] 4 containers: [205cacb029f0 ffa497a17609 f6b883d29008 ba79364733a5]
	I0729 04:20:48.856031    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:48.871989    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:20:48.872055    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:48.882841    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:20:48.882918    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:48.894180    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:20:48.894250    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:48.904761    3891 logs.go:276] 0 containers: []
	W0729 04:20:48.904774    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:48.904833    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:48.915436    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:20:48.915453    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:48.915459    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:48.951481    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:20:48.951489    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:20:48.965668    3891 logs.go:123] Gathering logs for coredns [205cacb029f0] ...
	I0729 04:20:48.965677    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205cacb029f0"
	I0729 04:20:48.976983    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:20:48.976997    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:20:48.994107    3891 logs.go:123] Gathering logs for coredns [ffa497a17609] ...
	I0729 04:20:48.994117    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffa497a17609"
	I0729 04:20:49.005416    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:20:49.005430    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:20:49.020065    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:20:49.020076    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:20:49.033105    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:20:49.033114    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:20:49.048050    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:49.048061    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:49.052471    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:49.052481    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:49.075727    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:20:49.075738    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:49.086930    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:49.086941    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:49.121878    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:20:49.121891    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:20:49.135361    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:20:49.135372    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:20:49.147201    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:20:49.147215    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:20:51.660812    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:56.662930    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:56.663084    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:56.676994    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:20:56.677071    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:56.688264    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:20:56.688333    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:56.698760    3891 logs.go:276] 4 containers: [205cacb029f0 ffa497a17609 f6b883d29008 ba79364733a5]
	I0729 04:20:56.698823    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:56.708820    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:20:56.708895    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:56.719734    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:20:56.719803    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:56.730240    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:20:56.730304    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:56.745703    3891 logs.go:276] 0 containers: []
	W0729 04:20:56.745718    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:56.745781    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:56.756680    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:20:56.756696    3891 logs.go:123] Gathering logs for coredns [ffa497a17609] ...
	I0729 04:20:56.756702    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffa497a17609"
	I0729 04:20:56.772185    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:20:56.772196    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:20:56.788171    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:20:56.788181    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:20:56.805679    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:56.805690    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:56.839103    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:20:56.839112    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:20:56.851256    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:20:56.851267    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:56.862843    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:56.862855    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:56.898289    3891 logs.go:123] Gathering logs for coredns [205cacb029f0] ...
	I0729 04:20:56.898299    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205cacb029f0"
	I0729 04:20:56.918358    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:20:56.918369    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:20:56.932846    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:20:56.932859    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:20:56.947876    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:56.947888    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:56.952588    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:20:56.952598    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:20:56.967872    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:20:56.967888    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:20:56.982387    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:20:56.982399    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:20:56.995087    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:56.995096    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:59.522528    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:04.523617    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:04.523886    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:21:04.553249    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:21:04.553372    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:21:04.569867    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:21:04.569956    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:21:04.583659    3891 logs.go:276] 4 containers: [205cacb029f0 ffa497a17609 f6b883d29008 ba79364733a5]
	I0729 04:21:04.583731    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:21:04.594704    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:21:04.594774    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:21:04.606934    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:21:04.606999    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:21:04.619692    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:21:04.619759    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:21:04.630356    3891 logs.go:276] 0 containers: []
	W0729 04:21:04.630370    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:21:04.630431    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:21:04.641490    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:21:04.641508    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:21:04.641514    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:21:04.658040    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:21:04.658051    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:21:04.673358    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:21:04.673368    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:21:04.684876    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:21:04.684889    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:21:04.701653    3891 logs.go:123] Gathering logs for coredns [205cacb029f0] ...
	I0729 04:21:04.701668    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205cacb029f0"
	I0729 04:21:04.712962    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:21:04.712976    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:21:04.725383    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:21:04.725392    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:21:04.740366    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:21:04.740378    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:21:04.745118    3891 logs.go:123] Gathering logs for coredns [ffa497a17609] ...
	I0729 04:21:04.745125    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffa497a17609"
	I0729 04:21:04.777693    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:21:04.777707    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:21:04.798870    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:21:04.798882    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:21:04.810832    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:21:04.810843    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:21:04.834563    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:21:04.834571    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:21:04.867403    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:21:04.867413    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:21:04.902565    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:21:04.902576    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:21:07.418916    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:12.421249    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:12.421496    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:21:12.445221    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:21:12.445327    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:21:12.461345    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:21:12.461432    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:21:12.474284    3891 logs.go:276] 4 containers: [205cacb029f0 ffa497a17609 f6b883d29008 ba79364733a5]
	I0729 04:21:12.474371    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:21:12.485066    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:21:12.485137    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:21:12.495467    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:21:12.495532    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:21:12.506453    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:21:12.506521    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:21:12.516962    3891 logs.go:276] 0 containers: []
	W0729 04:21:12.516976    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:21:12.517032    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:21:12.527726    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:21:12.527742    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:21:12.527748    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:21:12.539379    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:21:12.539389    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:21:12.558881    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:21:12.558890    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:21:12.594351    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:21:12.594362    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:21:12.606625    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:21:12.606635    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:21:12.619449    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:21:12.619459    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:21:12.656372    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:21:12.656401    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:21:12.671210    3891 logs.go:123] Gathering logs for coredns [205cacb029f0] ...
	I0729 04:21:12.671220    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205cacb029f0"
	I0729 04:21:12.682572    3891 logs.go:123] Gathering logs for coredns [ffa497a17609] ...
	I0729 04:21:12.682584    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffa497a17609"
	I0729 04:21:12.694819    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:21:12.694831    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:21:12.712876    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:21:12.712886    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:21:12.736692    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:21:12.736701    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:21:12.741645    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:21:12.741652    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:21:12.753103    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:21:12.753111    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:21:12.764385    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:21:12.764394    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:21:15.281101    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:20.283592    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:20.284042    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:21:20.316956    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:21:20.317091    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:21:20.336835    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:21:20.336924    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:21:20.351929    3891 logs.go:276] 4 containers: [205cacb029f0 ffa497a17609 f6b883d29008 ba79364733a5]
	I0729 04:21:20.352011    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:21:20.363580    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:21:20.363656    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:21:20.374122    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:21:20.374188    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:21:20.385637    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:21:20.385705    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:21:20.396429    3891 logs.go:276] 0 containers: []
	W0729 04:21:20.396440    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:21:20.396505    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:21:20.406809    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:21:20.406828    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:21:20.406833    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:21:20.427446    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:21:20.427456    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:21:20.445106    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:21:20.445118    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:21:20.469777    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:21:20.469784    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:21:20.490572    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:21:20.490582    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:21:20.505245    3891 logs.go:123] Gathering logs for coredns [ffa497a17609] ...
	I0729 04:21:20.505255    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffa497a17609"
	I0729 04:21:20.518424    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:21:20.518438    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:21:20.530136    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:21:20.530150    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:21:20.571340    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:21:20.571355    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:21:20.583324    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:21:20.583334    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:21:20.587732    3891 logs.go:123] Gathering logs for coredns [205cacb029f0] ...
	I0729 04:21:20.587738    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205cacb029f0"
	I0729 04:21:20.599657    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:21:20.599668    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:21:20.611968    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:21:20.611977    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:21:20.626925    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:21:20.626938    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:21:20.638910    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:21:20.638920    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:21:23.175973    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:28.178207    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:28.178418    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:21:28.201759    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:21:28.201877    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:21:28.223658    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:21:28.223737    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:21:28.236378    3891 logs.go:276] 4 containers: [205cacb029f0 ffa497a17609 f6b883d29008 ba79364733a5]
	I0729 04:21:28.236450    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:21:28.247102    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:21:28.247163    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:21:28.257154    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:21:28.257212    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:21:28.267638    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:21:28.267708    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:21:28.277861    3891 logs.go:276] 0 containers: []
	W0729 04:21:28.277871    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:21:28.277923    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:21:28.288261    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:21:28.288279    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:21:28.288284    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:21:28.300246    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:21:28.300261    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:21:28.319042    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:21:28.319053    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:21:28.336264    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:21:28.336277    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:21:28.347923    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:21:28.347935    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:21:28.372941    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:21:28.372952    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:21:28.384796    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:21:28.384810    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:21:28.389725    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:21:28.389735    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:21:28.433302    3891 logs.go:123] Gathering logs for coredns [ffa497a17609] ...
	I0729 04:21:28.433315    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffa497a17609"
	I0729 04:21:28.445084    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:21:28.445098    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:21:28.461608    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:21:28.461620    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:21:28.475839    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:21:28.475849    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:21:28.487175    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:21:28.487187    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:21:28.520141    3891 logs.go:123] Gathering logs for coredns [205cacb029f0] ...
	I0729 04:21:28.520149    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205cacb029f0"
	I0729 04:21:28.532710    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:21:28.532718    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:21:31.045821    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:36.047923    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:36.048083    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:21:36.058941    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:21:36.059003    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:21:36.079408    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:21:36.079479    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:21:36.090118    3891 logs.go:276] 4 containers: [205cacb029f0 ffa497a17609 f6b883d29008 ba79364733a5]
	I0729 04:21:36.090187    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:21:36.101208    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:21:36.101278    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:21:36.114617    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:21:36.114683    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:21:36.125278    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:21:36.125338    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:21:36.135392    3891 logs.go:276] 0 containers: []
	W0729 04:21:36.135406    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:21:36.135468    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:21:36.145966    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:21:36.145983    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:21:36.145988    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:21:36.160829    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:21:36.160842    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:21:36.172138    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:21:36.172151    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:21:36.190076    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:21:36.190090    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:21:36.201835    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:21:36.201848    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:21:36.206086    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:21:36.206092    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:21:36.241439    3891 logs.go:123] Gathering logs for coredns [ffa497a17609] ...
	I0729 04:21:36.241449    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffa497a17609"
	I0729 04:21:36.253779    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:21:36.253790    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:21:36.265748    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:21:36.265758    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:21:36.276975    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:21:36.276987    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:21:36.310471    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:21:36.310479    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:21:36.327245    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:21:36.327254    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:21:36.340908    3891 logs.go:123] Gathering logs for coredns [205cacb029f0] ...
	I0729 04:21:36.340920    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205cacb029f0"
	I0729 04:21:36.358170    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:21:36.358183    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:21:36.371234    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:21:36.371245    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:21:38.898500    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:43.899023    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:43.899170    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:21:43.909727    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:21:43.909802    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:21:43.920628    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:21:43.920698    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:21:43.931242    3891 logs.go:276] 4 containers: [205cacb029f0 ffa497a17609 f6b883d29008 ba79364733a5]
	I0729 04:21:43.931317    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:21:43.942167    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:21:43.942229    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:21:43.952324    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:21:43.952383    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:21:43.962909    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:21:43.962998    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:21:43.973547    3891 logs.go:276] 0 containers: []
	W0729 04:21:43.973558    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:21:43.973619    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:21:43.984384    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:21:43.984401    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:21:43.984406    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:21:44.019946    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:21:44.019954    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:21:44.031762    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:21:44.031776    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:21:44.036306    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:21:44.036315    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:21:44.050551    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:21:44.050563    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:21:44.066736    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:21:44.066747    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:21:44.091247    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:21:44.091254    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:21:44.104122    3891 logs.go:123] Gathering logs for coredns [ffa497a17609] ...
	I0729 04:21:44.104132    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffa497a17609"
	I0729 04:21:44.115864    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:21:44.115874    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:21:44.128138    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:21:44.128148    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:21:44.142584    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:21:44.142594    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:21:44.178815    3891 logs.go:123] Gathering logs for coredns [205cacb029f0] ...
	I0729 04:21:44.178827    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205cacb029f0"
	I0729 04:21:44.191086    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:21:44.191098    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:21:44.203336    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:21:44.203347    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:21:44.220127    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:21:44.220137    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:21:46.733823    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:51.735886    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:51.735999    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:21:51.752780    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:21:51.752869    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:21:51.766131    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:21:51.766207    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:21:51.776948    3891 logs.go:276] 4 containers: [205cacb029f0 ffa497a17609 f6b883d29008 ba79364733a5]
	I0729 04:21:51.777026    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:21:51.790612    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:21:51.790685    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:21:51.801639    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:21:51.801710    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:21:51.812549    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:21:51.812624    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:21:51.825398    3891 logs.go:276] 0 containers: []
	W0729 04:21:51.825410    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:21:51.825478    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:21:51.836514    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:21:51.836533    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:21:51.836538    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:21:51.849938    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:21:51.849952    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:21:51.862911    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:21:51.862924    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:21:51.876429    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:21:51.876442    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:21:51.913488    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:21:51.913508    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:21:51.929974    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:21:51.929986    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:21:51.941966    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:21:51.941980    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:21:51.962244    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:21:51.962258    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:21:51.986870    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:21:51.986880    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:21:51.991068    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:21:51.991074    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:21:52.002297    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:21:52.002306    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:21:52.017049    3891 logs.go:123] Gathering logs for coredns [ffa497a17609] ...
	I0729 04:21:52.017059    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffa497a17609"
	I0729 04:21:52.029693    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:21:52.029703    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:21:52.064477    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:21:52.064488    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:21:52.078547    3891 logs.go:123] Gathering logs for coredns [205cacb029f0] ...
	I0729 04:21:52.078566    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205cacb029f0"
	I0729 04:21:54.591689    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:59.593946    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:59.594362    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:21:59.629282    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:21:59.629422    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:21:59.650698    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:21:59.650796    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:21:59.665955    3891 logs.go:276] 4 containers: [205cacb029f0 ffa497a17609 f6b883d29008 ba79364733a5]
	I0729 04:21:59.666039    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:21:59.678948    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:21:59.679017    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:21:59.689799    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:21:59.689866    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:21:59.707610    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:21:59.707685    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:21:59.718361    3891 logs.go:276] 0 containers: []
	W0729 04:21:59.718372    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:21:59.718424    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:21:59.730401    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:21:59.730417    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:21:59.730423    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:21:59.742734    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:21:59.742746    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:21:59.759025    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:21:59.759036    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:21:59.784158    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:21:59.784169    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:21:59.789071    3891 logs.go:123] Gathering logs for coredns [205cacb029f0] ...
	I0729 04:21:59.789078    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205cacb029f0"
	I0729 04:21:59.800762    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:21:59.800777    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:21:59.812430    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:21:59.812440    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:21:59.830251    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:21:59.830264    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:21:59.866959    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:21:59.866972    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:21:59.885841    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:21:59.885852    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:21:59.898242    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:21:59.898253    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:21:59.909687    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:21:59.909697    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:21:59.921520    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:21:59.921530    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:21:59.936229    3891 logs.go:123] Gathering logs for coredns [ffa497a17609] ...
	I0729 04:21:59.936239    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffa497a17609"
	I0729 04:21:59.948283    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:21:59.948294    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:22:02.484811    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:22:07.486931    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:22:07.491550    3891 out.go:177] 
	W0729 04:22:07.496419    3891 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0729 04:22:07.496428    3891 out.go:239] * 
	W0729 04:22:07.497133    3891 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:22:07.508442    3891 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-033000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-29 04:22:07.605258 -0700 PDT m=+2866.864422876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-033000 -n running-upgrade-033000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-033000 -n running-upgrade-033000: exit status 2 (15.59283325s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-033000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-475000          | force-systemd-flag-475000 | jenkins | v1.33.1 | 29 Jul 24 04:12 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-799000              | force-systemd-env-799000  | jenkins | v1.33.1 | 29 Jul 24 04:12 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-799000           | force-systemd-env-799000  | jenkins | v1.33.1 | 29 Jul 24 04:12 PDT | 29 Jul 24 04:12 PDT |
	| start   | -p docker-flags-470000                | docker-flags-470000       | jenkins | v1.33.1 | 29 Jul 24 04:12 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-475000             | force-systemd-flag-475000 | jenkins | v1.33.1 | 29 Jul 24 04:12 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-475000          | force-systemd-flag-475000 | jenkins | v1.33.1 | 29 Jul 24 04:12 PDT | 29 Jul 24 04:12 PDT |
	| start   | -p cert-expiration-099000             | cert-expiration-099000    | jenkins | v1.33.1 | 29 Jul 24 04:12 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-470000 ssh               | docker-flags-470000       | jenkins | v1.33.1 | 29 Jul 24 04:12 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-470000 ssh               | docker-flags-470000       | jenkins | v1.33.1 | 29 Jul 24 04:12 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-470000                | docker-flags-470000       | jenkins | v1.33.1 | 29 Jul 24 04:12 PDT | 29 Jul 24 04:12 PDT |
	| start   | -p cert-options-582000                | cert-options-582000       | jenkins | v1.33.1 | 29 Jul 24 04:12 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-582000 ssh               | cert-options-582000       | jenkins | v1.33.1 | 29 Jul 24 04:12 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-582000 -- sudo        | cert-options-582000       | jenkins | v1.33.1 | 29 Jul 24 04:12 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-582000                | cert-options-582000       | jenkins | v1.33.1 | 29 Jul 24 04:12 PDT | 29 Jul 24 04:12 PDT |
	| start   | -p running-upgrade-033000             | minikube                  | jenkins | v1.26.0 | 29 Jul 24 04:12 PDT | 29 Jul 24 04:13 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-033000             | running-upgrade-033000    | jenkins | v1.33.1 | 29 Jul 24 04:13 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-099000             | cert-expiration-099000    | jenkins | v1.33.1 | 29 Jul 24 04:15 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-099000             | cert-expiration-099000    | jenkins | v1.33.1 | 29 Jul 24 04:15 PDT | 29 Jul 24 04:15 PDT |
	| start   | -p kubernetes-upgrade-325000          | kubernetes-upgrade-325000 | jenkins | v1.33.1 | 29 Jul 24 04:15 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-325000          | kubernetes-upgrade-325000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
	| start   | -p kubernetes-upgrade-325000          | kubernetes-upgrade-325000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-325000          | kubernetes-upgrade-325000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
	| start   | -p stopped-upgrade-338000             | minikube                  | jenkins | v1.26.0 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-338000 stop           | minikube                  | jenkins | v1.26.0 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
	| start   | -p stopped-upgrade-338000             | stopped-upgrade-338000    | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 04:16:58
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
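Each entry below follows that klog convention; taking the first line of the log as a worked example:

    I0729 04:16:58.640090    4028 out.go:291] Setting OutFile to fd 1 ...
    # I               severity: I=info, W=warning, E=error, F=fatal
    # 0729            month/day (July 29)
    # 04:16:58.640090 hh:mm:ss.uuuuuu timestamp
    # 4028            process/thread id (a second process, 3891, interleaves its lines below)
    # out.go:291      source file and line that emitted the message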
	I0729 04:16:58.640090    4028 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:16:58.640245    4028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:16:58.640249    4028 out.go:304] Setting ErrFile to fd 2...
	I0729 04:16:58.640251    4028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:16:58.640419    4028 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:16:58.641492    4028 out.go:298] Setting JSON to false
	I0729 04:16:58.659453    4028 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2781,"bootTime":1722249037,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:16:58.659534    4028 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:16:58.675134    4028 out.go:177] * [stopped-upgrade-338000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:16:58.683123    4028 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:16:58.683145    4028 notify.go:220] Checking for updates...
	I0729 04:16:58.691080    4028 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:16:58.694093    4028 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:16:58.697135    4028 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:16:58.700077    4028 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:16:58.703082    4028 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:16:58.706425    4028 config.go:182] Loaded profile config "stopped-upgrade-338000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:16:58.710036    4028 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 04:16:58.713078    4028 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:16:58.717111    4028 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:16:58.724095    4028 start.go:297] selected driver: qemu2
	I0729 04:16:58.724102    4028 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-338000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50517 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-338000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 04:16:58.724170    4028 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:16:58.727046    4028 cni.go:84] Creating CNI manager for ""
	I0729 04:16:58.727070    4028 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:16:58.727095    4028 start.go:340] cluster config:
	{Name:stopped-upgrade-338000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50517 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-338000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 04:16:58.727153    4028 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:16:58.735097    4028 out.go:177] * Starting "stopped-upgrade-338000" primary control-plane node in "stopped-upgrade-338000" cluster
	I0729 04:16:58.739101    4028 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 04:16:58.739119    4028 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0729 04:16:58.739127    4028 cache.go:56] Caching tarball of preloaded images
	I0729 04:16:58.739197    4028 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:16:58.739208    4028 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0729 04:16:58.739262    4028 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/config.json ...
	I0729 04:16:58.739682    4028 start.go:360] acquireMachinesLock for stopped-upgrade-338000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:16:58.739712    4028 start.go:364] duration metric: took 23.208µs to acquireMachinesLock for "stopped-upgrade-338000"
	I0729 04:16:58.739722    4028 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:16:58.739727    4028 fix.go:54] fixHost starting: 
	I0729 04:16:58.739836    4028 fix.go:112] recreateIfNeeded on stopped-upgrade-338000: state=Stopped err=<nil>
	W0729 04:16:58.739844    4028 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:16:58.748073    4028 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-338000" ...
	I0729 04:16:58.974856    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:16:58.752102    4028 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:16:58.752171    4028 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/stopped-upgrade-338000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/stopped-upgrade-338000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/stopped-upgrade-338000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50482-:22,hostfwd=tcp::50483-:2376,hostname=stopped-upgrade-338000 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/stopped-upgrade-338000/disk.qcow2
	I0729 04:16:58.798497    4028 main.go:141] libmachine: STDOUT: 
	I0729 04:16:58.798530    4028 main.go:141] libmachine: STDERR: 
	I0729 04:16:58.798535    4028 main.go:141] libmachine: Waiting for VM to start (ssh -p 50482 docker@127.0.0.1)...
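The -nic user,...,hostfwd options in the qemu invocation above forward host port 50482 to the guest's SSH port 22 and host port 50483 to the Docker daemon port 2376, which is why the wait step dials ssh -p 50482 docker@127.0.0.1. A rough manual equivalent of that reachability probe, using the machine key path that appears later in this log:

    # poll the forwarded SSH port until the guest accepts a connection
    ssh -o StrictHostKeyChecking=no -p 50482 \
        -i /Users/jenkins/minikube-integration/19336-945/.minikube/machines/stopped-upgrade-338000/id_rsa \
        docker@127.0.0.1 true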
	I0729 04:17:03.976971    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:03.977286    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:17:04.016439    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:17:04.016591    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:17:04.042861    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:17:04.042963    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:17:04.057101    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:17:04.057185    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:17:04.069383    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:17:04.069465    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:17:04.080304    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:17:04.080372    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:17:04.090989    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:17:04.091072    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:17:04.101841    3891 logs.go:276] 0 containers: []
	W0729 04:17:04.101855    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:17:04.101929    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:17:04.115254    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:17:04.115283    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:17:04.115289    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:17:04.153809    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:17:04.153820    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:17:04.169995    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:17:04.170007    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:17:04.192386    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:17:04.192395    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:17:04.203836    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:17:04.203850    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:17:04.226932    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:17:04.226940    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
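The container-status command above is a small fallback idiom: `which crictl || echo crictl` substitutes the crictl path when the binary is installed; when it is not, the bare name makes the first command fail, so the `|| sudo docker ps -a` branch runs instead. The same pattern in isolation:

    # prefer crictl when present, otherwise fall back to the docker CLI
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a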
	I0729 04:17:04.238686    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:17:04.238695    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:17:04.253173    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:17:04.253181    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:17:04.264159    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:17:04.264169    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:17:04.275504    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:17:04.275517    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:17:04.287578    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:17:04.287590    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:17:04.302481    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:17:04.302491    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:17:04.314353    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:17:04.314365    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:17:04.319314    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:17:04.319321    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:17:04.355604    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:17:04.355616    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:17:04.375138    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:17:04.375149    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:17:06.891169    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:11.893380    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:11.893760    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:17:11.926394    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:17:11.926534    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:17:11.954600    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:17:11.954694    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:17:11.967675    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:17:11.967751    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:17:11.982406    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:17:11.982478    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:17:11.992988    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:17:11.993058    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:17:12.003440    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:17:12.003511    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:17:12.013802    3891 logs.go:276] 0 containers: []
	W0729 04:17:12.013812    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:17:12.013874    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:17:12.024698    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:17:12.024717    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:17:12.024723    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:17:12.038853    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:17:12.038865    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:17:12.050444    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:17:12.050458    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:17:12.062016    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:17:12.062030    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:17:12.083939    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:17:12.083950    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:17:12.120589    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:17:12.120598    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:17:12.142321    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:17:12.142334    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:17:12.161321    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:17:12.161333    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:17:12.173103    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:17:12.173114    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:17:12.196197    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:17:12.196207    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:17:12.207619    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:17:12.207634    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:17:12.211888    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:17:12.211895    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:17:12.222984    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:17:12.222995    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:17:12.235124    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:17:12.235135    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:17:12.271352    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:17:12.271363    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:17:12.285571    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:17:12.285584    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:17:14.799706    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:17.803140    4028 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/config.json ...
	I0729 04:17:17.803873    4028 machine.go:94] provisionDockerMachine start ...
	I0729 04:17:17.804034    4028 main.go:141] libmachine: Using SSH client type: native
	I0729 04:17:17.804534    4028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103026a10] 0x103029270 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0729 04:17:17.804548    4028 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 04:17:17.886669    4028 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 04:17:17.886700    4028 buildroot.go:166] provisioning hostname "stopped-upgrade-338000"
	I0729 04:17:17.886793    4028 main.go:141] libmachine: Using SSH client type: native
	I0729 04:17:17.886990    4028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103026a10] 0x103029270 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0729 04:17:17.887000    4028 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-338000 && echo "stopped-upgrade-338000" | sudo tee /etc/hostname
	I0729 04:17:17.948593    4028 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-338000
	
	I0729 04:17:17.948656    4028 main.go:141] libmachine: Using SSH client type: native
	I0729 04:17:17.948798    4028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103026a10] 0x103029270 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0729 04:17:17.948807    4028 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-338000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-338000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-338000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 04:17:18.008233    4028 main.go:141] libmachine: SSH cmd err, output: <nil>: 
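The empty output here is expected: either /etc/hosts already mapped the hostname, or the silent sed branch rewrote the existing 127.0.1.1 line in place; only the tee fallback would have echoed anything. Either way the guest ends up with an entry like:

    127.0.1.1 stopped-upgrade-338000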
	I0729 04:17:18.008245    4028 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19336-945/.minikube CaCertPath:/Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19336-945/.minikube}
	I0729 04:17:18.008258    4028 buildroot.go:174] setting up certificates
	I0729 04:17:18.008263    4028 provision.go:84] configureAuth start
	I0729 04:17:18.008270    4028 provision.go:143] copyHostCerts
	I0729 04:17:18.008341    4028 exec_runner.go:144] found /Users/jenkins/minikube-integration/19336-945/.minikube/ca.pem, removing ...
	I0729 04:17:18.008346    4028 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19336-945/.minikube/ca.pem
	I0729 04:17:18.008470    4028 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19336-945/.minikube/ca.pem (1078 bytes)
	I0729 04:17:18.008665    4028 exec_runner.go:144] found /Users/jenkins/minikube-integration/19336-945/.minikube/cert.pem, removing ...
	I0729 04:17:18.008669    4028 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19336-945/.minikube/cert.pem
	I0729 04:17:18.008724    4028 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19336-945/.minikube/cert.pem (1123 bytes)
	I0729 04:17:18.008832    4028 exec_runner.go:144] found /Users/jenkins/minikube-integration/19336-945/.minikube/key.pem, removing ...
	I0729 04:17:18.008835    4028 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19336-945/.minikube/key.pem
	I0729 04:17:18.008887    4028 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19336-945/.minikube/key.pem (1679 bytes)
	I0729 04:17:18.008978    4028 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19336-945/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-338000 san=[127.0.0.1 localhost minikube stopped-upgrade-338000]
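The server certificate is generated with the SAN set shown above (san=[127.0.0.1 localhost minikube stopped-upgrade-338000]). One way to confirm the SANs on the host copy of the cert:

    openssl x509 -noout -text \
        -in /Users/jenkins/minikube-integration/19336-945/.minikube/machines/server.pem \
        | grep -A1 'Subject Alternative Name'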
	I0729 04:17:18.257534    4028 provision.go:177] copyRemoteCerts
	I0729 04:17:18.257589    4028 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 04:17:18.257599    4028 sshutil.go:53] new ssh client: &{IP:localhost Port:50482 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/stopped-upgrade-338000/id_rsa Username:docker}
	I0729 04:17:18.290213    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 04:17:18.297305    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 04:17:18.304071    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 04:17:18.310744    4028 provision.go:87] duration metric: took 302.485833ms to configureAuth
	I0729 04:17:18.310752    4028 buildroot.go:189] setting minikube options for container-runtime
	I0729 04:17:18.310857    4028 config.go:182] Loaded profile config "stopped-upgrade-338000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:17:18.310895    4028 main.go:141] libmachine: Using SSH client type: native
	I0729 04:17:18.310992    4028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103026a10] 0x103029270 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0729 04:17:18.310999    4028 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 04:17:18.366171    4028 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 04:17:18.366180    4028 buildroot.go:70] root file system type: tmpfs
	I0729 04:17:18.366237    4028 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 04:17:18.366302    4028 main.go:141] libmachine: Using SSH client type: native
	I0729 04:17:18.366424    4028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103026a10] 0x103029270 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0729 04:17:18.366456    4028 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 04:17:18.426474    4028 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 04:17:18.426523    4028 main.go:141] libmachine: Using SSH client type: native
	I0729 04:17:18.426628    4028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103026a10] 0x103029270 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0729 04:17:18.426638    4028 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 04:17:18.758160    4028 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0729 04:17:18.758173    4028 machine.go:97] duration metric: took 954.321458ms to provisionDockerMachine
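The unit-install step above is a compare-or-swap: `diff -u` exits zero when the rendered unit matches the installed one (nothing to do), and any difference, including the file not existing yet (the "can't stat" message above), takes the `||` branch that moves the new unit into place and reloads, enables, and restarts docker. The idiom in isolation:

    # replace the unit only when it changed, then apply it
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
        sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
        sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    }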
	I0729 04:17:18.758181    4028 start.go:293] postStartSetup for "stopped-upgrade-338000" (driver="qemu2")
	I0729 04:17:18.758187    4028 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 04:17:18.758246    4028 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 04:17:18.758254    4028 sshutil.go:53] new ssh client: &{IP:localhost Port:50482 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/stopped-upgrade-338000/id_rsa Username:docker}
	I0729 04:17:18.789263    4028 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 04:17:18.790553    4028 info.go:137] Remote host: Buildroot 2021.02.12
	I0729 04:17:18.790560    4028 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19336-945/.minikube/addons for local assets ...
	I0729 04:17:18.790663    4028 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19336-945/.minikube/files for local assets ...
	I0729 04:17:18.790785    4028 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19336-945/.minikube/files/etc/ssl/certs/13972.pem -> 13972.pem in /etc/ssl/certs
	I0729 04:17:18.790910    4028 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 04:17:18.793866    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/files/etc/ssl/certs/13972.pem --> /etc/ssl/certs/13972.pem (1708 bytes)
	I0729 04:17:18.800919    4028 start.go:296] duration metric: took 42.734ms for postStartSetup
	I0729 04:17:18.800933    4028 fix.go:56] duration metric: took 20.061856458s for fixHost
	I0729 04:17:18.800968    4028 main.go:141] libmachine: Using SSH client type: native
	I0729 04:17:18.801073    4028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103026a10] 0x103029270 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0729 04:17:18.801078    4028 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 04:17:18.856109    4028 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722251839.313678712
	
	I0729 04:17:18.856118    4028 fix.go:216] guest clock: 1722251839.313678712
	I0729 04:17:18.856122    4028 fix.go:229] Guest: 2024-07-29 04:17:19.313678712 -0700 PDT Remote: 2024-07-29 04:17:18.800935 -0700 PDT m=+20.184814709 (delta=512.743712ms)
	I0729 04:17:18.856134    4028 fix.go:200] guest clock delta is within tolerance: 512.743712ms
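The clock check runs `date +%s.%N` in the guest (the %!s(MISSING)/%!N(MISSING) rendering a few lines up appears to be a printf-verb logging artifact, not the command that was sent) and compares the result with the host clock: guest 1722251839.313678712 is 512.743712ms ahead of the host reading, within tolerance, so no time resync is performed.

    # guest-side half of the check
    date +%s.%N   # -> 1722251839.313678712 in this run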
	I0729 04:17:18.856138    4028 start.go:83] releasing machines lock for "stopped-upgrade-338000", held for 20.117072291s
	I0729 04:17:18.856200    4028 ssh_runner.go:195] Run: cat /version.json
	I0729 04:17:18.856210    4028 sshutil.go:53] new ssh client: &{IP:localhost Port:50482 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/stopped-upgrade-338000/id_rsa Username:docker}
	I0729 04:17:18.856200    4028 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 04:17:18.856256    4028 sshutil.go:53] new ssh client: &{IP:localhost Port:50482 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/stopped-upgrade-338000/id_rsa Username:docker}
	W0729 04:17:18.856780    4028 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50482: connect: connection refused
	I0729 04:17:18.856801    4028 retry.go:31] will retry after 211.079939ms: dial tcp [::1]:50482: connect: connection refused
	W0729 04:17:18.884000    4028 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0729 04:17:18.884049    4028 ssh_runner.go:195] Run: systemctl --version
	I0729 04:17:18.885706    4028 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 04:17:18.887349    4028 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 04:17:18.887379    4028 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0729 04:17:18.890454    4028 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0729 04:17:18.895162    4028 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
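The two find/sed passes above rewrite any bridge/podman CNI configs so pod addressing lands on 10.244.0.0/16; the message confirms 87-podman-bridge.conflist was adjusted. A quick in-guest check:

    # the rewritten subnet after minikube's sed pass
    sudo grep '"subnet"' /etc/cni/net.d/87-podman-bridge.conflist
    #   "subnet": "10.244.0.0/16"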
	I0729 04:17:18.895171    4028 start.go:495] detecting cgroup driver to use...
	I0729 04:17:18.895246    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 04:17:18.901570    4028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0729 04:17:18.904897    4028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 04:17:18.907573    4028 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 04:17:18.907602    4028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 04:17:18.910532    4028 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 04:17:18.913734    4028 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 04:17:18.916762    4028 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 04:17:18.919428    4028 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 04:17:18.922628    4028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 04:17:18.925930    4028 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 04:17:18.929089    4028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 04:17:18.932043    4028 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 04:17:18.934667    4028 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 04:17:18.937678    4028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:17:19.002866    4028 ssh_runner.go:195] Run: sudo systemctl restart containerd
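The sed series above pins containerd to the cgroupfs driver (SystemdCgroup = false), swaps the deprecated runtime names to io.containerd.runc.v2, and points conf_dir at /etc/cni/net.d before the daemon-reload and restart. Verifying the key setting in the guest:

    # containerd should now be on cgroupfs, matching the docker config that follows
    sudo grep SystemdCgroup /etc/containerd/config.toml
    #   SystemdCgroup = false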
	I0729 04:17:19.013307    4028 start.go:495] detecting cgroup driver to use...
	I0729 04:17:19.013378    4028 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 04:17:19.018835    4028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 04:17:19.022990    4028 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 04:17:19.029441    4028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 04:17:19.034407    4028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 04:17:19.038784    4028 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0729 04:17:19.080765    4028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 04:17:19.085163    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 04:17:19.090808    4028 ssh_runner.go:195] Run: which cri-dockerd
	I0729 04:17:19.092306    4028 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 04:17:19.095461    4028 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 04:17:19.102516    4028 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 04:17:19.169844    4028 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 04:17:19.234381    4028 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 04:17:19.234442    4028 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 04:17:19.239844    4028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:17:19.306449    4028 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 04:17:20.417141    4028 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.110693542s)
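The 130-byte daemon.json written just above is not echoed into the log; given the "configuring docker to use cgroupfs" message, it presumably carries a cgroupdriver exec-opt, along these lines (content assumed, not shown in this log):

    # assumed shape of the generated /etc/docker/daemon.json
    cat /etc/docker/daemon.json
    #   {"exec-opts": ["native.cgroupdriver=cgroupfs"]}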
	I0729 04:17:20.417249    4028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 04:17:20.423439    4028 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0729 04:17:20.431127    4028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 04:17:20.437181    4028 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 04:17:20.500319    4028 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 04:17:20.571492    4028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:17:20.633609    4028 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 04:17:20.639565    4028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 04:17:20.644223    4028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:17:20.710606    4028 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 04:17:20.750588    4028 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 04:17:20.750672    4028 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 04:17:20.753673    4028 start.go:563] Will wait 60s for crictl version
	I0729 04:17:20.753724    4028 ssh_runner.go:195] Run: which crictl
	I0729 04:17:20.755267    4028 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 04:17:20.769391    4028 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0729 04:17:20.769459    4028 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 04:17:20.785101    4028 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 04:17:19.802311    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:19.802511    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:17:19.825842    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:17:19.825966    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:17:19.843191    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:17:19.843273    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:17:19.855235    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:17:19.855309    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:17:19.870329    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:17:19.870397    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:17:19.881120    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:17:19.881192    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:17:19.891607    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:17:19.891679    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:17:19.901494    3891 logs.go:276] 0 containers: []
	W0729 04:17:19.901509    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:17:19.901569    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:17:19.912490    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:17:19.912506    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:17:19.912512    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:17:19.923618    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:17:19.923628    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:17:19.946236    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:17:19.946243    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:17:19.961018    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:17:19.961030    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:17:19.972135    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:17:19.972147    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:17:19.983423    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:17:19.983433    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:17:20.000478    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:17:20.000487    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:17:20.011473    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:17:20.011485    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:17:20.048847    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:17:20.048858    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:17:20.053238    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:17:20.053247    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:17:20.087674    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:17:20.087686    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:17:20.101989    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:17:20.102004    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:17:20.113937    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:17:20.113950    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:17:20.134100    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:17:20.134111    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:17:20.148682    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:17:20.148693    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:17:20.160634    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:17:20.160644    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:17:20.805029    4028 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0729 04:17:20.805092    4028 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0729 04:17:20.806509    4028 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 04:17:20.810474    4028 kubeadm.go:883] updating cluster {Name:stopped-upgrade-338000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50517 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-338000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0729 04:17:20.810516    4028 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 04:17:20.810556    4028 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 04:17:20.821027    4028 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 04:17:20.821036    4028 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 04:17:20.821085    4028 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 04:17:20.824149    4028 ssh_runner.go:195] Run: which lz4
	I0729 04:17:20.825443    4028 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 04:17:20.826873    4028 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 04:17:20.826883    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0729 04:17:21.750752    4028 docker.go:649] duration metric: took 925.365833ms to copy over tarball
	I0729 04:17:21.750813    4028 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 04:17:22.913166    4028 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.162377916s)
	I0729 04:17:22.913180    4028 ssh_runner.go:146] rm: /preloaded.tar.lz4
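
The scp/tar/rm trio above is the preload fast path: ship one lz4-compressed tarball of container images and unpack it straight into /var so Docker's overlay2 store comes up pre-populated. A hedged re-creation of the extraction step (same flags and target path as the log; assumes lz4 is installed on the target, which the earlier `which lz4` probe checks):

    #!/bin/bash
    set -euo pipefail
    TARBALL=/preloaded.tar.lz4
    # Unpack into /var, preserving the security.capability extended attribute
    # that some of the packaged binaries rely on, then drop the tarball.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf "$TARBALL"
    sudo rm -f "$TARBALL"
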
	I0729 04:17:22.928614    4028 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 04:17:22.931398    4028 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0729 04:17:22.936423    4028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:17:22.992881    4028 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 04:17:22.673601    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:24.568753    4028 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.575903791s)
	I0729 04:17:24.568839    4028 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 04:17:24.580765    4028 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 04:17:24.580773    4028 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 04:17:24.580780    4028 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 04:17:24.585430    4028 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:17:24.586909    4028 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:17:24.588387    4028 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:17:24.588473    4028 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:17:24.590213    4028 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 04:17:24.590703    4028 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:17:24.591092    4028 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:17:24.592481    4028 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:17:24.592775    4028 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:17:24.593441    4028 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:17:24.593448    4028 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 04:17:24.594000    4028 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:17:24.594600    4028 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:17:24.595830    4028 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:17:24.595885    4028 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:17:24.596505    4028 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
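
Every "daemon lookup ... No such image" line above is an expected miss, not a failure: minikube asks the local Docker daemon for each image first and falls back to its on-disk cache when the daemon does not have it. The probe it runs is essentially:

    # Prints the image ID if the daemon has it; exits non-zero with
    # "No such image" (the error quoted in the log) otherwise.
    docker image inspect --format '{{.Id}}' registry.k8s.io/pause:3.7
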
	I0729 04:17:25.022437    4028 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:17:25.024525    4028 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 04:17:25.033964    4028 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0729 04:17:25.033996    4028 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:17:25.034050    4028 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:17:25.037657    4028 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:17:25.039759    4028 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0729 04:17:25.039777    4028 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0729 04:17:25.039811    4028 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0729 04:17:25.049789    4028 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0729 04:17:25.054473    4028 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:17:25.057746    4028 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 04:17:25.057823    4028 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0729 04:17:25.057842    4028 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:17:25.057860    4028 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0729 04:17:25.057872    4028 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:17:25.073444    4028 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0729 04:17:25.073466    4028 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:17:25.073483    4028 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0729 04:17:25.073517    4028 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:17:25.073552    4028 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0729 04:17:25.073560    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0729 04:17:25.077904    4028 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	W0729 04:17:25.084744    4028 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 04:17:25.084868    4028 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:17:25.090808    4028 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0729 04:17:25.090839    4028 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0729 04:17:25.090855    4028 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:17:25.090897    4028 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:17:25.095646    4028 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 04:17:25.095659    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0729 04:17:25.109893    4028 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0729 04:17:25.109989    4028 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0729 04:17:25.110006    4028 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:17:25.110053    4028 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:17:25.132096    4028 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 04:17:25.139255    4028 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0729 04:17:25.139286    4028 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 04:17:25.139394    4028 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 04:17:25.151174    4028 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0729 04:17:25.151194    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0729 04:17:25.151199    4028 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0729 04:17:25.151215    4028 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:17:25.151255    4028 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0729 04:17:25.183630    4028 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 04:17:25.203159    4028 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 04:17:25.203171    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0729 04:17:25.224075    4028 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 04:17:25.224195    4028 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:17:25.245080    4028 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 04:17:25.245107    4028 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0729 04:17:25.245125    4028 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:17:25.245177    4028 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:17:25.258469    4028 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 04:17:25.258584    4028 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 04:17:25.259929    4028 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0729 04:17:25.259940    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0729 04:17:25.288308    4028 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 04:17:25.288322    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0729 04:17:25.523371    4028 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 04:17:25.523414    4028 cache_images.go:92] duration metric: took 942.656958ms to LoadCachedImages
	W0729 04:17:25.523460    4028 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
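
The block above is one full LoadCachedImages cycle per image: inspect reports a missing or wrong hash, the stale tag is removed with docker rmi, the cached archive is scp'd to /var/lib/minikube/images, and it is streamed into the daemon with docker load; the closing warning fires because one archive (kube-controller-manager_v1.24.1) is absent from the host-side cache. A hedged sketch of a single iteration, with paths mirroring the log's layout:

    #!/bin/bash
    set -euo pipefail
    IMG="registry.k8s.io/pause:3.7"
    SRC="$HOME/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7"  # host-side cache
    DST="/var/lib/minikube/images/pause_3.7"                            # guest-side staging
    docker rmi "$IMG" 2>/dev/null || true   # drop any wrong-hash copy first
    sudo cp "$SRC" "$DST"                   # the log does this hop over scp
    sudo /bin/bash -c "cat '$DST' | docker load"
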
	I0729 04:17:25.523469    4028 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0729 04:17:25.523524    4028 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-338000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-338000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
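
The unit fragment above lands on disk as a systemd drop-in (the 10-kubeadm.conf scp'd a few lines below): the empty ExecStart= clears any packaged command line before the override re-declares it, and a daemon-reload makes systemd pick the drop-in up. Done by hand, the same install looks roughly like this (flags copied from the log):

    #!/bin/bash
    set -euo pipefail
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-338000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet
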
	I0729 04:17:25.523590    4028 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 04:17:25.537320    4028 cni.go:84] Creating CNI manager for ""
	I0729 04:17:25.537331    4028 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:17:25.537335    4028 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 04:17:25.537344    4028 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-338000 NodeName:stopped-upgrade-338000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 04:17:25.537412    4028 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-338000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
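The rendered four-document config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) can be exercised without mutating the node: kubeadm's --dry-run walks the same phases that the restart path further below invokes one by one. A hedged check, using the binary and config paths from this run:

    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
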
	I0729 04:17:25.537465    4028 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0729 04:17:25.540927    4028 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 04:17:25.540957    4028 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 04:17:25.543708    4028 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0729 04:17:25.548469    4028 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 04:17:25.553282    4028 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0729 04:17:25.558713    4028 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0729 04:17:25.560110    4028 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 04:17:25.563419    4028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:17:25.629515    4028 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 04:17:25.636338    4028 certs.go:68] Setting up /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000 for IP: 10.0.2.15
	I0729 04:17:25.636348    4028 certs.go:194] generating shared ca certs ...
	I0729 04:17:25.636358    4028 certs.go:226] acquiring lock for ca certs: {Name:mk0965f831896eb9b1f5dee9ac66a2ece4b593d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:17:25.636533    4028 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19336-945/.minikube/ca.key
	I0729 04:17:25.636596    4028 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19336-945/.minikube/proxy-client-ca.key
	I0729 04:17:25.636605    4028 certs.go:256] generating profile certs ...
	I0729 04:17:25.636695    4028 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/client.key
	I0729 04:17:25.636716    4028 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/apiserver.key.a7dbec32
	I0729 04:17:25.636726    4028 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/apiserver.crt.a7dbec32 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0729 04:17:25.707181    4028 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/apiserver.crt.a7dbec32 ...
	I0729 04:17:25.707192    4028 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/apiserver.crt.a7dbec32: {Name:mk4f7c46013d8982827f9dd2e084af8713094999 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:17:25.707490    4028 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/apiserver.key.a7dbec32 ...
	I0729 04:17:25.707495    4028 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/apiserver.key.a7dbec32: {Name:mkd146571ce421c6254955e0f574c7716ca821fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:17:25.707640    4028 certs.go:381] copying /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/apiserver.crt.a7dbec32 -> /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/apiserver.crt
	I0729 04:17:25.707795    4028 certs.go:385] copying /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/apiserver.key.a7dbec32 -> /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/apiserver.key
	I0729 04:17:25.707952    4028 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/proxy-client.key
	I0729 04:17:25.708085    4028 certs.go:484] found cert: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/1397.pem (1338 bytes)
	W0729 04:17:25.708113    4028 certs.go:480] ignoring /Users/jenkins/minikube-integration/19336-945/.minikube/certs/1397_empty.pem, impossibly tiny 0 bytes
	I0729 04:17:25.708118    4028 certs.go:484] found cert: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 04:17:25.708141    4028 certs.go:484] found cert: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem (1078 bytes)
	I0729 04:17:25.708159    4028 certs.go:484] found cert: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem (1123 bytes)
	I0729 04:17:25.708178    4028 certs.go:484] found cert: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/key.pem (1679 bytes)
	I0729 04:17:25.708218    4028 certs.go:484] found cert: /Users/jenkins/minikube-integration/19336-945/.minikube/files/etc/ssl/certs/13972.pem (1708 bytes)
	I0729 04:17:25.708570    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 04:17:25.715265    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 04:17:25.722080    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 04:17:25.729405    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 04:17:25.736813    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 04:17:25.743666    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 04:17:25.750527    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 04:17:25.758092    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 04:17:25.765751    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/certs/1397.pem --> /usr/share/ca-certificates/1397.pem (1338 bytes)
	I0729 04:17:25.772820    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/files/etc/ssl/certs/13972.pem --> /usr/share/ca-certificates/13972.pem (1708 bytes)
	I0729 04:17:25.779560    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 04:17:25.786165    4028 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 04:17:25.791595    4028 ssh_runner.go:195] Run: openssl version
	I0729 04:17:25.793372    4028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1397.pem && ln -fs /usr/share/ca-certificates/1397.pem /etc/ssl/certs/1397.pem"
	I0729 04:17:25.796330    4028 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1397.pem
	I0729 04:17:25.797666    4028 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:42 /usr/share/ca-certificates/1397.pem
	I0729 04:17:25.797686    4028 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1397.pem
	I0729 04:17:25.799502    4028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1397.pem /etc/ssl/certs/51391683.0"
	I0729 04:17:25.802397    4028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13972.pem && ln -fs /usr/share/ca-certificates/13972.pem /etc/ssl/certs/13972.pem"
	I0729 04:17:25.805804    4028 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13972.pem
	I0729 04:17:25.807292    4028 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:42 /usr/share/ca-certificates/13972.pem
	I0729 04:17:25.807309    4028 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13972.pem
	I0729 04:17:25.809066    4028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13972.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 04:17:25.811870    4028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 04:17:25.814668    4028 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 04:17:25.816165    4028 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:35 /usr/share/ca-certificates/minikubeCA.pem
	I0729 04:17:25.816185    4028 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 04:17:25.817945    4028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
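
The three openssl/ln rounds above implement OpenSSL's hashed trust-store layout: a certificate in /etc/ssl/certs is only found by verifiers through a symlink named <subject-hash>.0 (b5213941.0 for minikubeCA.pem here, 51391683.0 and 3ec20f2e.0 for the two test certs). Condensed into one reusable snippet for a single cert:

    #!/bin/bash
    set -euo pipefail
    CERT=/usr/share/ca-certificates/minikubeCA.pem       # path from the log
    sudo ln -fs "$CERT" "/etc/ssl/certs/$(basename "$CERT")"
    HASH=$(openssl x509 -hash -noout -in "$CERT")        # prints e.g. b5213941
    sudo ln -fs "/etc/ssl/certs/$(basename "$CERT")" "/etc/ssl/certs/${HASH}.0"
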
	I0729 04:17:25.821368    4028 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 04:17:25.822794    4028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 04:17:25.824840    4028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 04:17:25.826628    4028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 04:17:25.828545    4028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 04:17:25.830297    4028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 04:17:25.832073    4028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
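
Each -checkend 86400 probe above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would flag the cert for regeneration. In isolation:

    # Exit 0: still valid 24h from now. Exit 1: expires within 24h.
    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "good for another day" || echo "renewal needed"
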
	I0729 04:17:25.833899    4028 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-338000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50517 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-338000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 04:17:25.833964    4028 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 04:17:25.844062    4028 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 04:17:25.847011    4028 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 04:17:25.847017    4028 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 04:17:25.847038    4028 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 04:17:25.850774    4028 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 04:17:25.851088    4028 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-338000" does not appear in /Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:17:25.851183    4028 kubeconfig.go:62] /Users/jenkins/minikube-integration/19336-945/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-338000" cluster setting kubeconfig missing "stopped-upgrade-338000" context setting]
	I0729 04:17:25.851371    4028 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/kubeconfig: {Name:mkc1463454d977493e341af62af023d087f8e1b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:17:25.851845    4028 kapi.go:59] client config for stopped-upgrade-338000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/client.key", CAFile:"/Users/jenkins/minikube-integration/19336-945/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1043bc080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 04:17:25.852164    4028 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 04:17:25.854930    4028 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-338000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0729 04:17:25.854935    4028 kubeadm.go:1160] stopping kube-system containers ...
	I0729 04:17:25.854975    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 04:17:25.865473    4028 docker.go:483] Stopping containers: [cae11772d89d 4830b62c6b98 486a2b7332b3 8f2228fa6055 68f8e4539bd1 64317cceabde b56e17165644 285d228d3e90]
	I0729 04:17:25.865532    4028 ssh_runner.go:195] Run: docker stop cae11772d89d 4830b62c6b98 486a2b7332b3 8f2228fa6055 68f8e4539bd1 64317cceabde b56e17165644 285d228d3e90
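
The ps/stop pair above is the generic teardown of everything kubeadm started: the kubelet names its containers k8s_<container>_<pod>_<namespace>_..., so a name-regex filter on (kube-system) catches exactly the control-plane containers. An equivalent shell form:

    #!/bin/bash
    # List kube-system pod containers by the kubelet's naming convention, then stop them.
    ids=$(docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}}')
    if [ -n "$ids" ]; then docker stop $ids; fi   # unquoted: one argument per ID
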
	I0729 04:17:25.876013    4028 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 04:17:25.881515    4028 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 04:17:25.884626    4028 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 04:17:25.884632    4028 kubeadm.go:157] found existing configuration files:
	
	I0729 04:17:25.884659    4028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/admin.conf
	I0729 04:17:25.887336    4028 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 04:17:25.887360    4028 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 04:17:25.890194    4028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/kubelet.conf
	I0729 04:17:25.893293    4028 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 04:17:25.893318    4028 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 04:17:25.896034    4028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/controller-manager.conf
	I0729 04:17:25.898528    4028 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 04:17:25.898546    4028 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 04:17:25.901684    4028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/scheduler.conf
	I0729 04:17:25.904375    4028 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 04:17:25.904398    4028 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 04:17:25.906831    4028 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 04:17:25.909967    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:17:25.933080    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:17:26.455118    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:17:26.574816    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:17:26.604288    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
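
The five Run lines above are a full kubeadm init unrolled into individual phases, which is how minikube restarts an existing control plane without re-bootstrapping it: certificates, kubeconfigs, kubelet bring-up, static control-plane manifests, then local etcd. The same sequence as a script (binary and config paths from the log):

    #!/bin/bash
    set -euo pipefail
    CFG=/var/tmp/minikube/kubeadm.yaml
    BIN=/var/lib/minikube/binaries/v1.24.1
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CFG"  # $phase unquoted on purpose
    done
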
	I0729 04:17:26.630476    4028 api_server.go:52] waiting for apiserver process to appear ...
	I0729 04:17:26.630555    4028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:17:27.132371    4028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:17:27.632581    4028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:17:27.638009    4028 api_server.go:72] duration metric: took 1.007566584s to wait for apiserver process to appear ...
	I0729 04:17:27.638021    4028 api_server.go:88] waiting for apiserver healthz status ...
	I0729 04:17:27.638031    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
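
From this point the interleaved output shows both processes (PIDs 3891 and 4028) sitting in the same wait loop: probe https://10.0.2.15:8443/healthz, count a timeout as one failed attempt, retry until the overall budget runs out; every "context deadline exceeded" line below is one such miss. A curl rendition of that loop:

    #!/bin/bash
    # Poll the apiserver healthz endpoint (self-signed certs, hence -k) for ~60s.
    for _ in $(seq 1 30); do
      if curl -ks --max-time 2 https://10.0.2.15:8443/healthz | grep -q '^ok$'; then
        echo "apiserver healthy"; exit 0
      fi
      sleep 2
    done
    echo "apiserver never reported healthy" >&2; exit 1
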
	I0729 04:17:27.675632    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:27.675721    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:17:27.687570    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:17:27.687644    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:17:27.699644    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:17:27.699716    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:17:27.711357    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:17:27.711430    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:17:27.723299    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:17:27.723373    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:17:27.738845    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:17:27.738916    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:17:27.752525    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:17:27.752590    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:17:27.762919    3891 logs.go:276] 0 containers: []
	W0729 04:17:27.762931    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:17:27.762998    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:17:27.773225    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:17:27.773242    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:17:27.773247    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:17:27.788005    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:17:27.788016    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:17:27.800114    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:17:27.800126    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:17:27.818042    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:17:27.818054    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:17:27.858078    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:17:27.858087    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:17:27.873125    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:17:27.873136    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:17:27.885069    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:17:27.885080    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:17:27.896430    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:17:27.896447    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:17:27.908628    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:17:27.908642    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:17:27.932754    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:17:27.932764    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:17:27.937461    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:17:27.937470    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:17:27.959197    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:17:27.959213    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:17:27.973446    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:17:27.973457    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:17:27.985648    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:17:27.985661    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:17:28.022345    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:17:28.022355    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:17:28.034493    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:17:28.034505    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:17:30.548854    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:32.638076    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:32.638107    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:35.551081    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:35.551355    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:17:35.577529    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:17:35.577654    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:17:35.594002    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:17:35.594092    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:17:35.606918    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:17:35.606994    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:17:35.619040    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:17:35.619117    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:17:35.629667    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:17:35.629740    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:17:35.640341    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:17:35.640410    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:17:35.650196    3891 logs.go:276] 0 containers: []
	W0729 04:17:35.650211    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:17:35.650265    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:17:35.660295    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:17:35.660314    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:17:35.660320    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:17:35.679342    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:17:35.679353    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:17:35.691316    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:17:35.691330    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:17:35.703471    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:17:35.703486    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:17:35.715183    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:17:35.715197    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:17:35.719363    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:17:35.719369    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:17:35.752703    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:17:35.752713    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:17:35.767182    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:17:35.767195    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:17:35.781895    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:17:35.781912    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:17:35.793306    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:17:35.793319    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:17:35.810868    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:17:35.810878    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:17:35.825161    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:17:35.825172    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:17:35.836194    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:17:35.836205    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:17:35.847174    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:17:35.847183    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:17:35.885077    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:17:35.885087    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:17:35.909406    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:17:35.909414    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:17:37.639875    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:37.639946    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:38.422179    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:42.640417    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:42.640444    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:43.424693    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:43.424845    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:17:43.437270    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:17:43.437337    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:17:43.448325    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:17:43.448400    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:17:43.460108    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:17:43.460175    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:17:43.471260    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:17:43.471335    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:17:43.481541    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:17:43.481608    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:17:43.491929    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:17:43.491993    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:17:43.502050    3891 logs.go:276] 0 containers: []
	W0729 04:17:43.502062    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:17:43.502115    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:17:43.512371    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:17:43.512388    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:17:43.512394    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:17:43.549481    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:17:43.549499    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:17:43.569131    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:17:43.569144    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:17:43.581922    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:17:43.581933    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:17:43.595793    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:17:43.595803    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:17:43.610596    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:17:43.610607    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:17:43.622824    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:17:43.622838    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:17:43.635844    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:17:43.635857    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:17:43.659984    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:17:43.659995    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:17:43.664309    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:17:43.664317    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:17:43.675801    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:17:43.675813    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:17:43.687600    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:17:43.687611    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:17:43.705572    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:17:43.705582    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:17:43.718809    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:17:43.718820    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:17:43.755234    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:17:43.755245    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:17:43.769396    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:17:43.769407    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:17:46.296343    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:47.640719    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:47.640749    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:51.298428    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:51.298610    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:17:51.312645    3891 logs.go:276] 2 containers: [2d6d0851f546 2b705fa1d0ca]
	I0729 04:17:51.312729    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:17:51.324408    3891 logs.go:276] 2 containers: [1c93c1680863 a1bd11a4a42b]
	I0729 04:17:51.324473    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:17:51.335449    3891 logs.go:276] 1 containers: [566e808c856a]
	I0729 04:17:51.335514    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:17:51.346144    3891 logs.go:276] 2 containers: [06013c5e8a5f b4b562b1dbf8]
	I0729 04:17:51.346217    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:17:51.363407    3891 logs.go:276] 1 containers: [41a63b4e024b]
	I0729 04:17:51.363479    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:17:51.374059    3891 logs.go:276] 2 containers: [22565ef1f8a6 f4efaaa95d51]
	I0729 04:17:51.374122    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:17:51.391416    3891 logs.go:276] 0 containers: []
	W0729 04:17:51.391428    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:17:51.391492    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:17:51.401645    3891 logs.go:276] 1 containers: [8ba5c1618d21]
	I0729 04:17:51.401661    3891 logs.go:123] Gathering logs for storage-provisioner [8ba5c1618d21] ...
	I0729 04:17:51.401666    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ba5c1618d21"
	I0729 04:17:51.413543    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:17:51.413553    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:17:51.449490    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:17:51.449501    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:17:51.454259    3891 logs.go:123] Gathering logs for kube-apiserver [2d6d0851f546] ...
	I0729 04:17:51.454269    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d6d0851f546"
	I0729 04:17:51.468173    3891 logs.go:123] Gathering logs for coredns [566e808c856a] ...
	I0729 04:17:51.468185    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 566e808c856a"
	I0729 04:17:51.481181    3891 logs.go:123] Gathering logs for kube-apiserver [2b705fa1d0ca] ...
	I0729 04:17:51.481193    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b705fa1d0ca"
	I0729 04:17:51.499882    3891 logs.go:123] Gathering logs for etcd [a1bd11a4a42b] ...
	I0729 04:17:51.499894    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1bd11a4a42b"
	I0729 04:17:51.515403    3891 logs.go:123] Gathering logs for kube-controller-manager [f4efaaa95d51] ...
	I0729 04:17:51.515413    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4efaaa95d51"
	I0729 04:17:51.527400    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:17:51.527415    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:17:51.549392    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:17:51.549399    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:17:51.584630    3891 logs.go:123] Gathering logs for etcd [1c93c1680863] ...
	I0729 04:17:51.584643    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c93c1680863"
	I0729 04:17:51.598994    3891 logs.go:123] Gathering logs for kube-scheduler [06013c5e8a5f] ...
	I0729 04:17:51.599005    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06013c5e8a5f"
	I0729 04:17:51.612208    3891 logs.go:123] Gathering logs for kube-scheduler [b4b562b1dbf8] ...
	I0729 04:17:51.612218    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b562b1dbf8"
	I0729 04:17:51.623603    3891 logs.go:123] Gathering logs for kube-proxy [41a63b4e024b] ...
	I0729 04:17:51.623614    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41a63b4e024b"
	I0729 04:17:51.635528    3891 logs.go:123] Gathering logs for kube-controller-manager [22565ef1f8a6] ...
	I0729 04:17:51.635539    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22565ef1f8a6"
	I0729 04:17:51.653216    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:17:51.653228    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:17:52.641213    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:52.641267    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:54.167273    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:57.641802    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:57.641828    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:59.169886    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:59.169964    3891 kubeadm.go:597] duration metric: took 4m3.878274542s to restartPrimaryControlPlane
	W0729 04:17:59.170032    3891 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 04:17:59.170061    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 04:18:00.116435    3891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 04:18:00.121107    3891 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 04:18:00.124086    3891 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 04:18:00.126656    3891 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 04:18:00.126662    3891 kubeadm.go:157] found existing configuration files:
	
	I0729 04:18:00.126683    3891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/admin.conf
	I0729 04:18:00.129909    3891 kubeadm.go:163] "https://control-plane.minikube.internal:50299" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 04:18:00.129933    3891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 04:18:00.133187    3891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/kubelet.conf
	I0729 04:18:00.136249    3891 kubeadm.go:163] "https://control-plane.minikube.internal:50299" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 04:18:00.136275    3891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 04:18:00.138882    3891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/controller-manager.conf
	I0729 04:18:00.141439    3891 kubeadm.go:163] "https://control-plane.minikube.internal:50299" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 04:18:00.141464    3891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 04:18:00.144495    3891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/scheduler.conf
	I0729 04:18:00.146913    3891 kubeadm.go:163] "https://control-plane.minikube.internal:50299" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50299 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 04:18:00.146934    3891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 04:18:00.149566    3891 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 04:18:00.167669    3891 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 04:18:00.167710    3891 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 04:18:00.216199    3891 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 04:18:00.216286    3891 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 04:18:00.216346    3891 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 04:18:00.267166    3891 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 04:18:00.271376    3891 out.go:204]   - Generating certificates and keys ...
	I0729 04:18:00.271416    3891 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 04:18:00.271449    3891 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 04:18:00.271492    3891 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 04:18:00.271528    3891 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 04:18:00.271568    3891 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 04:18:00.271600    3891 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 04:18:00.271645    3891 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 04:18:00.271689    3891 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 04:18:00.271732    3891 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 04:18:00.271777    3891 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 04:18:00.271815    3891 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 04:18:00.271847    3891 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 04:18:00.381507    3891 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 04:18:00.465686    3891 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 04:18:00.579209    3891 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 04:18:00.624648    3891 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 04:18:00.658190    3891 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 04:18:00.658893    3891 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 04:18:00.658919    3891 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 04:18:00.722960    3891 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 04:18:00.731104    3891 out.go:204]   - Booting up control plane ...
	I0729 04:18:00.731160    3891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 04:18:00.731199    3891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 04:18:00.731267    3891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 04:18:00.731313    3891 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 04:18:00.731395    3891 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 04:18:02.642597    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:02.642620    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:05.227794    3891 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502513 seconds
	I0729 04:18:05.227848    3891 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 04:18:05.232529    3891 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 04:18:05.760246    3891 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 04:18:05.760708    3891 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-033000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 04:18:06.265805    3891 kubeadm.go:310] [bootstrap-token] Using token: qire2r.ix7rwxajfxrew1y5
	I0729 04:18:06.271670    3891 out.go:204]   - Configuring RBAC rules ...
	I0729 04:18:06.271726    3891 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 04:18:06.271772    3891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 04:18:06.273734    3891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 04:18:06.275578    3891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 04:18:06.276631    3891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 04:18:06.277613    3891 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 04:18:06.280649    3891 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 04:18:06.456679    3891 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 04:18:06.670479    3891 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 04:18:06.670905    3891 kubeadm.go:310] 
	I0729 04:18:06.670938    3891 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 04:18:06.670944    3891 kubeadm.go:310] 
	I0729 04:18:06.670989    3891 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 04:18:06.670991    3891 kubeadm.go:310] 
	I0729 04:18:06.671003    3891 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 04:18:06.671030    3891 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 04:18:06.671076    3891 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 04:18:06.671083    3891 kubeadm.go:310] 
	I0729 04:18:06.671109    3891 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 04:18:06.671112    3891 kubeadm.go:310] 
	I0729 04:18:06.671141    3891 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 04:18:06.671145    3891 kubeadm.go:310] 
	I0729 04:18:06.671195    3891 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 04:18:06.671235    3891 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 04:18:06.671273    3891 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 04:18:06.671276    3891 kubeadm.go:310] 
	I0729 04:18:06.671354    3891 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 04:18:06.671408    3891 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 04:18:06.671413    3891 kubeadm.go:310] 
	I0729 04:18:06.671532    3891 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qire2r.ix7rwxajfxrew1y5 \
	I0729 04:18:06.671591    3891 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e5aa2d5aa27d88407c50ef5c55a2dae7e3993515072a6e61b6476ae55fad38d6 \
	I0729 04:18:06.671605    3891 kubeadm.go:310] 	--control-plane 
	I0729 04:18:06.671613    3891 kubeadm.go:310] 
	I0729 04:18:06.671656    3891 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 04:18:06.671661    3891 kubeadm.go:310] 
	I0729 04:18:06.671701    3891 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qire2r.ix7rwxajfxrew1y5 \
	I0729 04:18:06.671755    3891 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e5aa2d5aa27d88407c50ef5c55a2dae7e3993515072a6e61b6476ae55fad38d6 
	I0729 04:18:06.671816    3891 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 04:18:06.671822    3891 cni.go:84] Creating CNI manager for ""
	I0729 04:18:06.671830    3891 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:18:06.675924    3891 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 04:18:06.684012    3891 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 04:18:06.687180    3891 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 04:18:06.691776    3891 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 04:18:06.691820    3891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 04:18:06.691821    3891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-033000 minikube.k8s.io/updated_at=2024_07_29T04_18_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53 minikube.k8s.io/name=running-upgrade-033000 minikube.k8s.io/primary=true
	I0729 04:18:06.739200    3891 ops.go:34] apiserver oom_adj: -16
	I0729 04:18:06.739330    3891 kubeadm.go:1113] duration metric: took 47.548167ms to wait for elevateKubeSystemPrivileges
	I0729 04:18:06.739344    3891 kubeadm.go:394] duration metric: took 4m11.461743625s to StartCluster
	I0729 04:18:06.739354    3891 settings.go:142] acquiring lock: {Name:mkb57b03ccb64deee52152ed8ac01af4d9e1ee07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:18:06.739446    3891 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:18:06.739811    3891 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/kubeconfig: {Name:mkc1463454d977493e341af62af023d087f8e1b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:18:06.740015    3891 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:18:06.740046    3891 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 04:18:06.740095    3891 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-033000"
	I0729 04:18:06.740108    3891 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-033000"
	I0729 04:18:06.740109    3891 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-033000"
	I0729 04:18:06.740120    3891 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-033000"
	W0729 04:18:06.740111    3891 addons.go:243] addon storage-provisioner should already be in state true
	I0729 04:18:06.740155    3891 host.go:66] Checking if "running-upgrade-033000" exists ...
	I0729 04:18:06.740400    3891 config.go:182] Loaded profile config "running-upgrade-033000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:18:06.741124    3891 kapi.go:59] client config for running-upgrade-033000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19336-945/.minikube/profiles/running-upgrade-033000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19336-945/.minikube/profiles/running-upgrade-033000/client.key", CAFile:"/Users/jenkins/minikube-integration/19336-945/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1017c0080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 04:18:06.741234    3891 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-033000"
	W0729 04:18:06.741239    3891 addons.go:243] addon default-storageclass should already be in state true
	I0729 04:18:06.741253    3891 host.go:66] Checking if "running-upgrade-033000" exists ...
	I0729 04:18:06.743871    3891 out.go:177] * Verifying Kubernetes components...
	I0729 04:18:06.744174    3891 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 04:18:06.748090    3891 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 04:18:06.748100    3891 sshutil.go:53] new ssh client: &{IP:localhost Port:50267 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/running-upgrade-033000/id_rsa Username:docker}
	I0729 04:18:06.751874    3891 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:18:07.643555    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:07.643589    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:06.755941    3891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:18:06.759978    3891 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 04:18:06.759983    3891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 04:18:06.759989    3891 sshutil.go:53] new ssh client: &{IP:localhost Port:50267 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/running-upgrade-033000/id_rsa Username:docker}
	I0729 04:18:06.835426    3891 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 04:18:06.840669    3891 api_server.go:52] waiting for apiserver process to appear ...
	I0729 04:18:06.840718    3891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:18:06.844596    3891 api_server.go:72] duration metric: took 104.572833ms to wait for apiserver process to appear ...
	I0729 04:18:06.844604    3891 api_server.go:88] waiting for apiserver healthz status ...
	I0729 04:18:06.844611    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:06.850040    3891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 04:18:06.873291    3891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 04:18:12.644809    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:12.644836    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:11.844640    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:11.844666    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:17.646432    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:17.646467    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:16.846361    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:16.846393    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:22.648460    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:22.648498    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:21.846519    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:21.846546    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:27.650586    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:27.650690    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:18:27.662665    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:18:27.662745    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:18:27.678224    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:18:27.678301    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:18:27.688916    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:18:27.688988    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:18:27.699179    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:18:27.699254    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:18:27.709416    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:18:27.709493    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:18:27.719511    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:18:27.719598    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:18:27.730085    4028 logs.go:276] 0 containers: []
	W0729 04:18:27.730096    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:18:27.730154    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:18:27.740310    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:18:27.740326    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:18:27.740332    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:18:27.753761    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:18:27.753772    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:18:27.768251    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:18:27.768261    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:18:27.782790    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:18:27.782803    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:18:27.794978    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:18:27.794989    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:18:27.834987    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:18:27.834997    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:18:27.928126    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:18:27.928153    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:18:27.942950    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:18:27.942962    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:18:27.958420    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:18:27.958438    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:18:27.973221    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:18:27.973230    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:18:27.997332    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:18:27.997342    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:18:28.008649    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:18:28.008672    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:18:28.023817    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:18:28.023827    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:18:28.035608    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:18:28.035618    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:18:28.051220    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:18:28.051231    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:18:28.055353    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:18:28.055359    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:18:28.081805    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:18:28.081817    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:18:26.846767    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:26.846840    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:30.601310    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:31.847570    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:31.847631    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:36.848141    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:36.848166    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 04:18:37.177478    3891 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 04:18:37.182568    3891 out.go:177] * Enabled addons: storage-provisioner
	I0729 04:18:35.603506    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:35.603662    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:18:35.620336    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:18:35.620414    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:18:35.633165    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:18:35.633240    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:18:35.643923    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:18:35.643989    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:18:35.655738    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:18:35.655819    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:18:35.666104    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:18:35.666172    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:18:35.680276    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:18:35.680341    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:18:35.690801    4028 logs.go:276] 0 containers: []
	W0729 04:18:35.690813    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:18:35.690869    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:18:35.700887    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:18:35.700903    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:18:35.700908    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:18:35.705439    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:18:35.705448    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:18:35.719226    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:18:35.719237    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:18:35.731375    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:18:35.731385    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:18:35.743440    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:18:35.743456    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:18:35.782400    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:18:35.782411    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:18:35.796966    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:18:35.796979    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:18:35.811891    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:18:35.811901    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:18:35.823534    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:18:35.823548    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:18:35.834985    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:18:35.834999    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:18:35.860634    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:18:35.860647    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:18:35.874529    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:18:35.874540    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:18:35.888669    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:18:35.888678    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:18:35.900021    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:18:35.900032    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:18:35.937545    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:18:35.937560    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:18:35.950013    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:18:35.950026    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:18:35.968417    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:18:35.968431    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:18:38.495719    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:37.191829    3891 addons.go:510] duration metric: took 30.452773584s for enable addons: enabled=[storage-provisioner]
	I0729 04:18:43.497087    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:43.497225    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:18:43.514917    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:18:43.515005    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:18:43.526119    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:18:43.526195    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:18:43.540144    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:18:43.540213    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:18:43.550537    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:18:43.550606    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:18:43.561293    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:18:43.561351    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:18:43.575137    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:18:43.575217    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:18:43.585051    4028 logs.go:276] 0 containers: []
	W0729 04:18:43.585064    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:18:43.585115    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:18:43.595751    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:18:43.595769    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:18:43.595774    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:18:43.612605    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:18:43.612618    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:18:43.624067    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:18:43.624080    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:18:43.635354    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:18:43.635370    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:18:41.849129    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:41.849174    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:43.652959    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:18:43.652971    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:18:43.668643    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:18:43.668657    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:18:43.681456    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:18:43.681467    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:18:43.712712    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:18:43.712723    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:18:43.727283    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:18:43.727299    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:18:43.742489    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:18:43.742501    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:18:43.756194    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:18:43.756205    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:18:43.780120    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:18:43.780127    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:18:43.817370    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:18:43.817377    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:18:43.821518    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:18:43.821526    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:18:43.857268    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:18:43.857284    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:18:43.871565    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:18:43.871575    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:18:43.883691    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:18:43.883702    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:18:46.397652    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:46.850288    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:46.850333    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:51.400252    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:51.400652    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:18:51.436265    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:18:51.436379    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:18:51.454849    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:18:51.454924    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:18:51.468361    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:18:51.468438    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:18:51.479515    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:18:51.479588    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:18:51.490213    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:18:51.490287    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:18:51.500782    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:18:51.500856    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:18:51.510608    4028 logs.go:276] 0 containers: []
	W0729 04:18:51.510619    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:18:51.510680    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:18:51.524019    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:18:51.524037    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:18:51.524042    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:18:51.535635    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:18:51.535646    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:18:51.553008    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:18:51.553021    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:18:51.564330    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:18:51.564340    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:18:51.606769    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:18:51.606791    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:18:51.611796    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:18:51.611805    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:18:51.649887    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:18:51.649899    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:18:51.662058    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:18:51.662071    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:18:51.673948    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:18:51.673960    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:18:51.698332    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:18:51.698342    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:18:51.711824    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:18:51.711837    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:18:51.723181    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:18:51.723192    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:18:51.736442    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:18:51.736455    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:18:51.750374    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:18:51.750386    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:18:51.765266    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:18:51.765277    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:18:51.782110    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:18:51.782120    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:18:51.795150    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:18:51.795160    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:18:51.851115    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:51.851132    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:54.321627    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:56.852164    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:56.852192    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:59.322985    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:59.323354    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:18:59.354026    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:18:59.354158    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:18:59.372752    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:18:59.372836    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:18:59.386532    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:18:59.386604    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:18:59.398241    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:18:59.398307    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:18:59.409113    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:18:59.409178    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:18:59.420019    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:18:59.420089    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:18:59.430992    4028 logs.go:276] 0 containers: []
	W0729 04:18:59.431003    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:18:59.431060    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:18:59.441793    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:18:59.441808    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:18:59.441814    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:18:59.459856    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:18:59.459871    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:18:59.484757    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:18:59.484767    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:18:59.496695    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:18:59.496709    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:18:59.514001    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:18:59.514016    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:18:59.539628    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:18:59.539636    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:18:59.543876    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:18:59.543883    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:18:59.554938    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:18:59.554950    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:18:59.569721    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:18:59.569735    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:18:59.581142    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:18:59.581151    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:18:59.620482    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:18:59.620501    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:18:59.634622    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:18:59.634635    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:18:59.649263    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:18:59.649277    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:18:59.664723    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:18:59.664735    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:18:59.703142    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:18:59.703154    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:18:59.718000    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:18:59.718014    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:18:59.734329    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:18:59.734343    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:02.249037    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:01.854054    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:01.854075    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
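
	Timestamps run backwards here (04:19:02 for PID 4028, then 04:19:01 for PID 3891) because the report interleaves two concurrent minikube processes polling the same guest endpoint. To read one stream in order, filter on the PID column; a sketch, assuming the report is saved locally as minikube.log (hypothetical file name):

		awk '$3 == "3891"' minikube.log   # the PID is the third field of each entry
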
	I0729 04:19:07.251232    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:07.251383    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:07.265215    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:19:07.265296    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:07.276996    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:19:07.277068    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:07.287918    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:19:07.287989    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:07.298682    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:19:07.298750    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:07.309174    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:19:07.309253    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:07.319637    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:19:07.319720    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:07.330319    4028 logs.go:276] 0 containers: []
	W0729 04:19:07.330328    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:07.330382    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:07.341001    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:19:07.341018    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:07.341024    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:07.378342    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:19:07.378352    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:19:07.392525    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:19:07.392536    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:19:07.408018    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:19:07.408027    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:19:07.420241    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:19:07.420251    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:19:07.432962    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:07.432974    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:07.472256    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:07.472266    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:07.476417    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:19:07.476423    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:19:07.490681    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:19:07.490692    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:19:07.502719    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:19:07.502730    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:19:07.516881    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:19:07.516892    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:19:07.533947    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:19:07.533957    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:19:07.547722    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:19:07.547732    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:19:07.562180    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:07.562188    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:07.585952    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:19:07.585961    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:19:07.611483    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:19:07.611495    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:19:07.623701    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:19:07.623717    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:06.856119    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:06.856244    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:06.877349    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:19:06.877424    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:06.887735    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:19:06.887812    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:06.898543    3891 logs.go:276] 2 containers: [f6b883d29008 ba79364733a5]
	I0729 04:19:06.898613    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:06.909211    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:19:06.909282    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:06.920045    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:19:06.920105    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:06.930733    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:19:06.930800    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:06.945131    3891 logs.go:276] 0 containers: []
	W0729 04:19:06.945150    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:06.945208    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:06.955447    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:19:06.955462    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:19:06.955467    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:19:06.970179    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:19:06.970189    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:19:06.992316    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:06.992327    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:07.017457    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:19:07.017465    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:19:07.032136    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:19:07.032146    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:19:07.045868    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:19:07.045878    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:19:07.058288    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:19:07.058299    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:19:07.069845    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:19:07.069855    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:19:07.081730    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:07.081741    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:07.116868    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:07.116876    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:07.121260    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:07.121267    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:07.159419    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:19:07.159430    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:19:07.171203    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:19:07.171213    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:09.685218    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:10.136900    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:14.687396    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:14.687580    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:14.699358    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:19:14.699435    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:14.709367    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:19:14.709442    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:14.719768    3891 logs.go:276] 2 containers: [f6b883d29008 ba79364733a5]
	I0729 04:19:14.719837    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:14.731234    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:19:14.731305    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:14.747610    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:19:14.747682    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:14.757996    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:19:14.758062    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:14.768370    3891 logs.go:276] 0 containers: []
	W0729 04:19:14.768388    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:14.768450    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:14.779228    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:19:14.779244    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:19:14.779250    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:19:14.790367    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:19:14.790378    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:19:14.804768    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:19:14.804779    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:19:14.823032    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:19:14.823042    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:19:14.834132    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:19:14.834142    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:19:14.848383    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:19:14.848393    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:19:14.859648    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:14.859658    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:14.897004    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:19:14.897015    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:19:14.911768    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:19:14.911780    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:19:14.924230    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:14.924240    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:14.950188    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:19:14.950196    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:14.966470    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:14.966480    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:15.001698    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:15.001706    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:15.137505    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:15.137615    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:15.150056    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:19:15.150120    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:15.160991    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:19:15.161054    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:15.172723    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:19:15.172792    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:15.183832    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:19:15.183898    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:15.194650    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:19:15.194722    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:15.209505    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:19:15.209576    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:15.220148    4028 logs.go:276] 0 containers: []
	W0729 04:19:15.220160    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:15.220223    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:15.231011    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:19:15.231028    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:19:15.231034    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:19:15.246652    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:19:15.246661    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:19:15.261995    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:19:15.262006    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:19:15.274522    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:19:15.274531    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:19:15.300117    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:15.300127    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:15.339196    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:15.339208    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:15.378306    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:19:15.378316    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:19:15.403811    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:19:15.403822    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:19:15.420430    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:19:15.420442    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:19:15.433293    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:19:15.433304    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:19:15.453319    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:19:15.453330    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:19:15.466431    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:19:15.466441    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:19:15.483491    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:15.483500    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:15.508365    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:19:15.508376    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:15.520484    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:15.520501    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:15.524838    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:19:15.524845    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:19:15.541357    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:19:15.541368    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:19:18.055184    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:17.508352    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:23.057428    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:23.057559    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:23.069106    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:19:23.069175    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:23.080174    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:19:23.080249    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:23.090927    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:19:23.091000    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:23.101952    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:19:23.102022    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:23.118046    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:19:23.118111    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:23.133934    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:19:23.134009    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:23.145308    4028 logs.go:276] 0 containers: []
	W0729 04:19:23.145318    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:23.145393    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:23.156286    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:19:23.156304    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:19:23.156310    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:19:23.175570    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:19:23.175586    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:19:23.191654    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:19:23.191668    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:19:23.203393    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:19:23.203405    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:23.215770    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:23.215784    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:23.251389    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:19:23.251400    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:19:23.267714    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:19:23.267724    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:19:23.282779    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:19:23.282790    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:19:23.294728    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:23.294739    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:23.318430    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:23.318438    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:23.355948    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:19:23.355959    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:19:23.368333    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:19:23.368349    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:19:23.384549    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:19:23.384562    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:19:23.402202    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:19:23.402213    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:19:23.416482    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:19:23.416493    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:19:23.441467    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:19:23.441479    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:19:23.453830    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:23.453842    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:22.510500    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:22.510677    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:22.523280    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:19:22.523361    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:22.534355    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:19:22.534429    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:22.545001    3891 logs.go:276] 2 containers: [f6b883d29008 ba79364733a5]
	I0729 04:19:22.545076    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:22.556922    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:19:22.556991    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:22.567224    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:19:22.567297    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:22.577565    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:19:22.577631    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:22.588024    3891 logs.go:276] 0 containers: []
	W0729 04:19:22.588037    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:22.588098    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:22.598689    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:19:22.598706    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:22.598712    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:22.603159    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:22.603168    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:22.641121    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:19:22.641134    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:19:22.656642    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:19:22.656653    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:19:22.671760    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:19:22.671771    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:19:22.691141    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:19:22.691153    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:19:22.706446    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:19:22.706460    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:19:22.718434    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:19:22.718445    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:19:22.735837    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:19:22.735846    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:19:22.747090    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:22.747100    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:22.770759    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:22.770768    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:22.804797    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:19:22.804809    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:19:22.819821    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:19:22.819831    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:25.333433    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:25.960028    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:30.336049    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:30.336346    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:30.365256    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:19:30.365392    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:30.395537    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:19:30.395621    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:30.408530    3891 logs.go:276] 2 containers: [f6b883d29008 ba79364733a5]
	I0729 04:19:30.408598    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:30.420099    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:19:30.420172    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:30.431534    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:19:30.431605    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:30.443052    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:19:30.443120    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:30.454181    3891 logs.go:276] 0 containers: []
	W0729 04:19:30.454197    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:30.454261    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:30.465218    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:19:30.465235    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:19:30.465240    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:19:30.479902    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:30.479912    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:30.503806    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:30.503816    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:30.539187    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:19:30.539199    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:19:30.556281    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:19:30.556292    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:19:30.567863    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:19:30.567874    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:19:30.579405    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:19:30.579415    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:19:30.597501    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:19:30.597510    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:19:30.609168    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:19:30.609178    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:30.620822    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:30.620832    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:30.656750    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:30.656768    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:30.661462    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:19:30.661469    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:19:30.675980    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:19:30.675990    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:19:30.962109    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:30.962259    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:30.984865    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:19:30.984937    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:31.004615    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:19:31.004695    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:31.019157    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:19:31.019227    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:31.030000    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:19:31.030071    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:31.040506    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:19:31.040571    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:31.056492    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:19:31.056559    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:31.066406    4028 logs.go:276] 0 containers: []
	W0729 04:19:31.066417    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:31.066475    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:31.076666    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:19:31.076684    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:19:31.076689    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:19:31.090708    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:19:31.090720    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:19:31.105210    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:19:31.105223    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:19:31.122993    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:31.123003    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:31.147896    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:31.147903    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:31.186502    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:19:31.186513    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:31.198431    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:19:31.198444    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:19:31.210211    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:19:31.210222    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:19:31.224695    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:19:31.224708    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:19:31.249056    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:19:31.249068    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:19:31.260946    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:31.260960    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:31.265157    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:19:31.265165    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:19:31.276458    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:19:31.276471    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:19:31.290904    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:19:31.290915    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:19:31.305393    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:19:31.305406    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:19:31.317336    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:19:31.317349    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:19:31.328178    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:31.328189    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:33.189831    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:33.872754    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:38.191995    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:38.192180    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:38.207473    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:19:38.207552    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:38.220560    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:19:38.220634    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:38.232632    3891 logs.go:276] 2 containers: [f6b883d29008 ba79364733a5]
	I0729 04:19:38.232704    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:38.243058    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:19:38.243131    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:38.257623    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:19:38.257706    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:38.268371    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:19:38.268427    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:38.278503    3891 logs.go:276] 0 containers: []
	W0729 04:19:38.278515    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:38.278571    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:38.288757    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:19:38.288772    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:19:38.288778    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:19:38.300192    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:19:38.300206    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:19:38.314606    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:19:38.314614    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:19:38.332495    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:19:38.332506    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:19:38.344510    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:19:38.344520    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:19:38.356130    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:19:38.356140    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:19:38.368211    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:19:38.368224    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:19:38.385594    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:38.385605    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:38.420694    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:38.420703    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:38.425228    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:38.425235    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:38.460631    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:19:38.460645    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:19:38.475985    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:38.475995    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:38.501154    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:19:38.501165    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
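
	Taken together, each block above is one iteration of minikube's wait loop: probe /healthz, and on timeout re-run the full enumeration-and-gather pass. A rough shell analogue of that loop (a sketch only; minikube uses a Go HTTP client internally, the ~5s budget and the pause between polls are inferred from the timestamps, and -k stands in for the cluster's self-signed apiserver certificate):

		until curl -ksS --max-time 5 https://10.0.2.15:8443/healthz; do
		  sleep 2.5
		  # on each failed probe, re-run the docker ps / docker logs pass sketched earlier
		done
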
	I0729 04:19:41.014656    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:38.875235    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:38.875408    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:38.894526    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:19:38.894626    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:38.908449    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:19:38.908522    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:38.920445    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:19:38.920514    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:38.930985    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:19:38.931052    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:38.946007    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:19:38.946081    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:38.957138    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:19:38.957207    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:38.970038    4028 logs.go:276] 0 containers: []
	W0729 04:19:38.970049    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:38.970109    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:38.980417    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:19:38.980437    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:19:38.980442    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:19:38.998226    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:19:38.998236    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:19:39.009633    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:19:39.009645    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:19:39.024005    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:39.024018    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:39.069484    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:19:39.069494    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:19:39.093726    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:19:39.093739    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:19:39.108185    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:19:39.108196    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:19:39.119580    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:19:39.119610    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:19:39.135146    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:19:39.135157    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:19:39.150398    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:19:39.150408    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:39.162237    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:39.162247    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:39.166878    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:19:39.166884    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:19:39.178731    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:19:39.178743    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:19:39.190162    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:39.190171    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:39.214921    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:19:39.214931    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:19:39.232827    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:19:39.232837    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:19:39.244091    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:39.244108    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:41.782887    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:46.015683    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:46.015799    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:46.030102    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:19:46.030172    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:46.042335    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:19:46.042406    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:46.052976    3891 logs.go:276] 2 containers: [f6b883d29008 ba79364733a5]
	I0729 04:19:46.053043    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:46.064069    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:19:46.064132    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:46.074577    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:19:46.074648    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:46.084920    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:19:46.084991    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:46.095103    3891 logs.go:276] 0 containers: []
	W0729 04:19:46.095117    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:46.095177    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:46.106249    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:19:46.106267    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:19:46.106272    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:19:46.117675    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:19:46.117685    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:19:46.129382    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:19:46.129392    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:19:46.143966    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:19:46.143976    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:19:46.158714    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:46.158724    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:46.183975    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:19:46.183983    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:46.196284    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:46.196295    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:46.229775    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:46.229783    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:46.233735    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:19:46.233744    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:19:46.247468    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:19:46.247478    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:19:46.264902    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:19:46.264914    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:19:46.276202    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:46.276211    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:46.310588    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:19:46.310602    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:19:46.785134    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:46.785304    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:46.799823    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:19:46.799900    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:46.813727    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:19:46.813794    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:46.824275    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:19:46.824333    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:46.834447    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:19:46.834517    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:46.844770    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:19:46.844839    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:46.855427    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:19:46.855489    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:46.872655    4028 logs.go:276] 0 containers: []
	W0729 04:19:46.872670    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:46.872730    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:46.884461    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:19:46.884479    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:46.884485    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:46.888805    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:19:46.888810    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:19:46.903067    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:19:46.903080    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:19:46.917937    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:46.917952    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:46.958539    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:19:46.958548    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:19:46.969959    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:19:46.969972    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:19:46.982910    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:19:46.982920    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:19:46.995847    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:46.995859    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:47.020556    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:19:47.020563    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:47.033171    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:19:47.033182    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:19:47.057091    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:19:47.057101    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:19:47.072512    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:19:47.072526    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:19:47.089805    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:19:47.089816    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:19:47.101715    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:19:47.101729    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:19:47.112457    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:19:47.112469    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:19:47.126267    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:19:47.126277    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:19:47.141160    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:47.141171    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:48.827888    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:49.677456    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:53.830660    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:53.831344    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:53.867600    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:19:53.867740    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:53.888720    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:19:53.888820    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:53.907877    3891 logs.go:276] 2 containers: [f6b883d29008 ba79364733a5]
	I0729 04:19:53.907947    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:53.920077    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:19:53.920151    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:53.937330    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:19:53.937396    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:53.947961    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:19:53.948029    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:53.958165    3891 logs.go:276] 0 containers: []
	W0729 04:19:53.958174    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:53.958227    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:53.968100    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:19:53.968116    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:53.968121    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:54.004515    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:19:54.004526    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:19:54.018980    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:19:54.018991    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:19:54.031864    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:19:54.031876    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:19:54.047733    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:19:54.047746    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:19:54.060624    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:19:54.060634    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:19:54.078662    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:19:54.078673    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:19:54.089859    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:54.089869    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:54.112672    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:19:54.112679    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:54.124633    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:54.124644    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:54.159861    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:54.159868    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:54.164859    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:19:54.164868    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:19:54.178833    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:19:54.178843    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:19:56.692049    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:54.679556    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:54.679680    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:54.694128    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:19:54.694204    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:54.704505    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:19:54.704573    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:54.715041    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:19:54.715103    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:54.725636    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:19:54.725700    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:54.736352    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:19:54.736422    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:54.751568    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:19:54.751639    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:54.761408    4028 logs.go:276] 0 containers: []
	W0729 04:19:54.761419    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:54.761478    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:54.771502    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:19:54.771521    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:54.771527    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:54.808160    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:19:54.808174    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:19:54.827452    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:19:54.827462    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:19:54.839291    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:19:54.839303    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:19:54.855278    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:19:54.855288    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:54.871969    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:54.871981    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:54.910256    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:19:54.910265    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:19:54.929456    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:19:54.929468    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:19:54.944911    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:19:54.944924    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:19:54.962257    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:19:54.962268    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:19:54.973701    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:54.973713    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:54.999205    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:54.999214    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:55.003269    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:19:55.003277    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:19:55.017602    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:19:55.017613    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:19:55.042928    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:19:55.042939    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:19:55.059262    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:19:55.059277    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:19:55.074720    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:19:55.074732    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:19:57.588300    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:01.694179    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:01.694352    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:01.709779    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:20:01.709848    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:02.590389    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:02.590584    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:02.613254    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:20:02.613361    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:02.629273    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:20:02.629359    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:02.644457    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:20:02.644532    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:02.656196    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:20:02.656272    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:02.666671    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:20:02.666737    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:02.677547    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:20:02.677619    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:02.687575    4028 logs.go:276] 0 containers: []
	W0729 04:20:02.687588    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:02.687654    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:02.701060    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:20:02.701079    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:02.701085    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:02.738668    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:20:02.738679    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:20:02.751435    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:20:02.751447    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:20:02.766172    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:20:02.766185    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:20:02.777712    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:20:02.777725    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:20:02.792837    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:20:02.792849    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:20:02.807574    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:20:02.807586    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:20:02.819297    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:02.819312    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:02.843666    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:02.843673    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:02.882886    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:02.882899    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:02.887417    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:20:02.887424    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:20:02.901097    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:20:02.901110    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:20:02.926657    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:20:02.926670    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:20:02.940980    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:20:02.940991    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:20:02.954813    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:20:02.954823    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:20:02.972164    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:20:02.972175    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:20:02.986602    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:20:02.986612    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:01.720347    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:20:01.720418    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:01.735131    3891 logs.go:276] 2 containers: [f6b883d29008 ba79364733a5]
	I0729 04:20:01.735203    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:01.746202    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:20:01.746266    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:01.756894    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:20:01.756959    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:01.768568    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:20:01.768638    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:01.778638    3891 logs.go:276] 0 containers: []
	W0729 04:20:01.778652    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:01.778710    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:01.789361    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:20:01.789375    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:20:01.789379    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:20:01.801026    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:01.801035    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:01.824163    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:20:01.824171    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:20:01.835454    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:20:01.835467    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:20:01.852017    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:01.852027    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:01.887998    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:20:01.888012    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:20:01.908570    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:20:01.908580    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:20:01.922532    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:20:01.922542    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:20:01.934505    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:20:01.934516    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:20:01.949625    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:20:01.949635    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:20:01.961360    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:01.961373    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:01.996455    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:01.996465    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:02.000869    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:20:02.000879    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:04.514193    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:05.500131    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:09.516403    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:09.516555    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:09.529326    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:20:09.529402    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:09.540203    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:20:09.540277    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:09.550604    3891 logs.go:276] 2 containers: [f6b883d29008 ba79364733a5]
	I0729 04:20:09.550674    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:09.561705    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:20:09.561775    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:09.572437    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:20:09.572506    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:09.582958    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:20:09.583025    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:09.593631    3891 logs.go:276] 0 containers: []
	W0729 04:20:09.593643    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:09.593699    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:09.604085    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:20:09.604100    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:20:09.604105    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:20:09.618335    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:20:09.618346    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:20:09.634873    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:20:09.634884    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:20:09.646832    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:20:09.646844    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:20:09.659444    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:09.659455    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:09.682848    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:20:09.682859    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:09.694189    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:20:09.694199    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:20:09.712707    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:09.712717    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:09.746411    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:09.746419    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:09.750850    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:09.750860    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:09.821181    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:20:09.821196    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:20:09.835232    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:20:09.835242    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:20:09.847943    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:20:09.847956    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:20:10.502415    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:10.502596    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:10.520192    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:20:10.520286    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:10.533716    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:20:10.533804    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:10.545304    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:20:10.545380    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:10.556189    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:20:10.556278    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:10.566676    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:20:10.566748    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:10.577440    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:20:10.577547    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:10.588131    4028 logs.go:276] 0 containers: []
	W0729 04:20:10.588142    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:10.588206    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:10.598315    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:20:10.598335    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:20:10.598340    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:20:10.623823    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:20:10.623836    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:20:10.637448    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:20:10.637462    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:20:10.654712    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:10.654723    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:10.694331    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:20:10.694339    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:20:10.708829    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:20:10.708840    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:20:10.720554    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:20:10.720564    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:20:10.745446    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:20:10.745456    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:20:10.756849    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:20:10.756859    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:20:10.768112    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:10.768123    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:10.792687    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:20:10.792701    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:10.805039    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:10.805049    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:10.809469    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:10.809474    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:10.844816    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:20:10.844827    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:20:10.859085    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:20:10.859095    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:20:10.870697    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:20:10.870710    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:20:10.893823    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:20:10.893834    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:20:13.411965    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:12.364722    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:18.414604    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:18.414852    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:18.446056    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:20:18.446162    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:18.463064    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:20:18.463147    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:18.476048    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:20:18.476123    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:18.487570    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:20:18.487641    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:18.497834    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:20:18.497897    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:18.508524    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:20:18.508595    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:18.518124    4028 logs.go:276] 0 containers: []
	W0729 04:20:18.518138    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:18.518191    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:18.528634    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:20:18.528652    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:20:18.528657    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:20:18.540383    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:20:18.540397    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:20:18.558323    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:18.558334    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:18.581991    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:18.582000    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:18.621365    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:18.621374    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:18.625629    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:20:18.625636    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:20:17.365683    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:17.365886    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:17.382795    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:20:17.382882    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:17.396009    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:20:17.396077    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:17.407275    3891 logs.go:276] 2 containers: [f6b883d29008 ba79364733a5]
	I0729 04:20:17.407341    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:17.417601    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:20:17.417666    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:17.428090    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:20:17.428163    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:17.438992    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:20:17.439055    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:17.449126    3891 logs.go:276] 0 containers: []
	W0729 04:20:17.449137    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:17.449197    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:17.459589    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:20:17.459603    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:20:17.459610    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:20:17.474343    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:20:17.474353    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:20:17.488395    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:20:17.488407    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:20:17.503042    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:20:17.503051    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:20:17.521020    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:17.521032    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:17.556049    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:17.556056    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:17.560403    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:17.560412    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:17.597079    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:20:17.597092    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:20:17.609093    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:20:17.609104    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:20:17.620390    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:20:17.620405    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:20:17.632577    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:20:17.632588    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:20:17.652887    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:17.652901    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:17.679629    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:20:17.679647    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:20.195461    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:18.639878    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:20:18.639891    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:20:18.655323    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:20:18.655333    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:20:18.667799    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:20:18.667812    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:20:18.682127    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:20:18.682137    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:18.693996    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:20:18.694011    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:20:18.711868    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:18.711879    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:18.747759    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:20:18.747771    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:20:18.773296    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:20:18.773308    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:20:18.787084    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:20:18.787095    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:20:18.798874    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:20:18.798889    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:20:18.813225    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:20:18.813235    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:20:21.327371    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:25.197767    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:25.198141    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:25.232438    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:20:25.232569    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:25.251844    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:20:25.251940    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:25.266567    3891 logs.go:276] 4 containers: [205cacb029f0 ffa497a17609 f6b883d29008 ba79364733a5]
	I0729 04:20:25.266642    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:25.278770    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:20:25.278841    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:25.289628    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:20:25.289701    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:25.305534    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:20:25.305599    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:25.316777    3891 logs.go:276] 0 containers: []
	W0729 04:20:25.316790    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:25.316848    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:25.330060    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:20:25.330076    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:25.330083    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:25.369024    3891 logs.go:123] Gathering logs for coredns [205cacb029f0] ...
	I0729 04:20:25.369036    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205cacb029f0"
	I0729 04:20:25.380406    3891 logs.go:123] Gathering logs for coredns [ffa497a17609] ...
	I0729 04:20:25.380418    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffa497a17609"
	I0729 04:20:25.391683    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:25.391694    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:25.416596    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:25.416606    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:25.450972    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:20:25.450979    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:20:25.463484    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:20:25.463494    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:20:25.475335    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:20:25.475343    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:20:25.493015    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:20:25.493031    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:20:25.505099    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:25.505109    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:25.509285    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:20:25.509293    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:20:25.523314    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:20:25.523326    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:20:25.537358    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:20:25.537367    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:20:25.555149    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:20:25.555160    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:20:25.567025    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:20:25.567036    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:26.329490    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:26.329621    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:26.356093    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:20:26.356173    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:26.369797    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:20:26.369868    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:26.380622    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:20:26.380692    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:26.391343    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:20:26.391422    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:26.402205    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:20:26.402273    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:26.414399    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:20:26.414474    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:26.424684    4028 logs.go:276] 0 containers: []
	W0729 04:20:26.424695    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:26.424756    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:26.435681    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:20:26.435698    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:26.435703    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:26.473156    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:26.473168    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:26.508021    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:20:26.508034    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:20:26.532882    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:20:26.532894    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:20:26.547183    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:20:26.547196    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:20:26.561960    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:20:26.561972    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:20:26.577148    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:20:26.577161    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:20:26.588673    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:20:26.588684    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:20:26.600197    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:20:26.600210    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:20:26.620162    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:20:26.620173    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:20:26.643467    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:20:26.643478    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:20:26.654874    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:26.654885    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:26.659177    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:20:26.659184    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:20:26.673127    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:20:26.673137    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:20:26.684493    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:20:26.684506    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:20:26.696006    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:26.696016    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:26.720775    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:20:26.720790    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:28.080517    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:29.235248    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:33.082668    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:33.082918    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:33.108201    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:20:33.108314    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:33.126087    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:20:33.126167    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:33.139370    3891 logs.go:276] 4 containers: [205cacb029f0 ffa497a17609 f6b883d29008 ba79364733a5]
	I0729 04:20:33.139460    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:33.150597    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:20:33.150673    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:33.161244    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:20:33.161329    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:33.171873    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:20:33.171945    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:33.182810    3891 logs.go:276] 0 containers: []
	W0729 04:20:33.182820    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:33.182879    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:33.193389    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:20:33.193407    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:20:33.193414    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:20:33.207098    3891 logs.go:123] Gathering logs for coredns [205cacb029f0] ...
	I0729 04:20:33.207108    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205cacb029f0"
	I0729 04:20:33.218768    3891 logs.go:123] Gathering logs for coredns [ffa497a17609] ...
	I0729 04:20:33.218777    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffa497a17609"
	I0729 04:20:33.230316    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:20:33.230327    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:20:33.247708    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:33.247716    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:33.272888    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:33.272896    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:33.307349    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:20:33.307356    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:20:33.321971    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:20:33.321984    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:20:33.334285    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:20:33.334299    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:20:33.349057    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:20:33.349068    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:20:33.364199    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:33.364212    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:33.368773    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:20:33.368784    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:20:33.381432    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:33.381444    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:33.420835    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:20:33.420848    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:20:33.438508    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:20:33.438522    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
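	(Editor's note: each "N containers: [...]" line above comes from a `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` call per control-plane component; an empty result yields the `No container was found matching "kindnet"` warnings. A minimal sketch of that enumeration, assuming Docker is on PATH — the helper name is hypothetical:)

```go
// Illustrative sketch (not minikube source): enumerate container IDs per
// component the same way the Run: lines above do.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containersFor is a hypothetical helper name, not a minikube identifier.
func containersFor(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one short ID per line
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containersFor(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
	}
}
```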
	I0729 04:20:35.951958    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:34.237989    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:34.238471    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:34.280544    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:20:34.280678    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:34.305803    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:20:34.305887    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:34.319736    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:20:34.319811    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:34.331006    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:20:34.331075    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:34.341767    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:20:34.341838    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:34.352865    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:20:34.352936    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:34.363655    4028 logs.go:276] 0 containers: []
	W0729 04:20:34.363667    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:34.363732    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:34.377457    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:20:34.377475    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:20:34.377481    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:20:34.388933    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:34.388946    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:34.412147    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:20:34.412160    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:34.424838    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:34.424851    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:34.464305    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:20:34.464314    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:20:34.482712    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:20:34.482723    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:20:34.497125    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:20:34.497135    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:20:34.508941    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:20:34.508951    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:20:34.526610    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:20:34.526620    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:20:34.538629    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:34.538640    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:34.542773    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:20:34.542783    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:20:34.558200    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:20:34.558210    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:20:34.570422    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:20:34.570434    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:20:34.585800    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:20:34.585810    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:20:34.599308    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:34.599324    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:34.638072    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:20:34.638088    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:20:34.663975    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:20:34.663986    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:20:37.181305    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:40.953103    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:40.953298    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:40.972544    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:20:40.972628    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:40.985980    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:20:40.986047    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:40.997493    3891 logs.go:276] 4 containers: [205cacb029f0 ffa497a17609 f6b883d29008 ba79364733a5]
	I0729 04:20:40.997573    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:41.008848    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:20:41.008921    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:41.019134    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:20:41.019203    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:41.029947    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:20:41.030013    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:41.040203    3891 logs.go:276] 0 containers: []
	W0729 04:20:41.040215    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:41.040273    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:41.050998    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:20:41.051013    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:20:41.051019    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:20:41.065315    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:20:41.065325    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:20:41.076730    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:41.076740    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:41.099995    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:20:41.100004    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:41.111425    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:20:41.111436    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:20:41.125142    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:20:41.125151    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:20:41.136926    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:41.136934    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:41.172141    3891 logs.go:123] Gathering logs for coredns [205cacb029f0] ...
	I0729 04:20:41.172150    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205cacb029f0"
	I0729 04:20:41.183879    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:20:41.183892    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:20:41.195354    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:20:41.195366    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:20:41.209837    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:20:41.209849    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:20:41.227607    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:41.227618    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:41.232176    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:41.232186    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:41.268036    3891 logs.go:123] Gathering logs for coredns [ffa497a17609] ...
	I0729 04:20:41.268046    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffa497a17609"
	I0729 04:20:41.279452    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:20:41.279464    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:20:42.183645    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:42.183973    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:42.217900    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:20:42.218033    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:42.238073    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:20:42.238166    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:42.252074    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:20:42.252151    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:42.268097    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:20:42.268170    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:42.279132    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:20:42.279201    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:42.290243    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:20:42.290313    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:42.301113    4028 logs.go:276] 0 containers: []
	W0729 04:20:42.301124    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:42.301184    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:42.312265    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:20:42.312284    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:42.312290    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:42.352607    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:42.352619    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:42.357158    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:42.357165    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:42.392031    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:20:42.392043    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:20:42.403703    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:20:42.403716    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:20:42.419793    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:20:42.419805    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:20:42.445366    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:20:42.445381    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:20:42.464777    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:20:42.464787    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:20:42.476509    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:20:42.476520    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:20:42.487754    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:20:42.487768    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:42.499715    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:20:42.499731    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:20:42.518891    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:20:42.518902    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:20:42.534598    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:20:42.534613    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:20:42.546492    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:20:42.546502    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:20:42.564929    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:20:42.564942    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:20:42.578698    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:20:42.578710    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:20:42.590304    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:42.590319    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
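	(Editor's note: once the IDs are known, the gathering step itself is just the shell commands quoted in the Run: lines — `docker logs --tail 400 <id>` for containers and `journalctl -u <unit> -n 400` for host services. A sketch under those assumptions; the container ID below is the kube-apiserver ID from this log, and the helper names are hypothetical:)

```go
// Illustrative sketch (not minikube source): tail the last 400 lines of a
// container's logs and of the kubelet journal, as the Run: lines above do.
package main

import (
	"fmt"
	"os/exec"
)

// tailContainer is a hypothetical helper name, not a minikube identifier.
func tailContainer(id string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("docker logs --tail 400 %s", id)).CombinedOutput()
	return string(out), err
}

// tailKubelet is a hypothetical helper name, not a minikube identifier.
func tailKubelet() (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		"sudo journalctl -u kubelet -n 400").CombinedOutput()
	return string(out), err
}

func main() {
	if logs, err := tailContainer("e4fbff702599"); err == nil { // kube-apiserver ID from this log
		fmt.Println(logs)
	}
	if logs, err := tailKubelet(); err == nil {
		fmt.Println(logs)
	}
}
```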
	I0729 04:20:43.790929    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:45.115113    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:48.793206    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:48.793495    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:48.823345    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:20:48.823470    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:48.842272    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:20:48.842361    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:48.855953    3891 logs.go:276] 4 containers: [205cacb029f0 ffa497a17609 f6b883d29008 ba79364733a5]
	I0729 04:20:48.856031    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:48.871989    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:20:48.872055    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:48.882841    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:20:48.882918    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:48.894180    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:20:48.894250    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:48.904761    3891 logs.go:276] 0 containers: []
	W0729 04:20:48.904774    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:48.904833    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:48.915436    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:20:48.915453    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:48.915459    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:48.951481    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:20:48.951489    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:20:48.965668    3891 logs.go:123] Gathering logs for coredns [205cacb029f0] ...
	I0729 04:20:48.965677    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205cacb029f0"
	I0729 04:20:48.976983    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:20:48.976997    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:20:48.994107    3891 logs.go:123] Gathering logs for coredns [ffa497a17609] ...
	I0729 04:20:48.994117    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffa497a17609"
	I0729 04:20:49.005416    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:20:49.005430    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:20:49.020065    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:20:49.020076    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:20:49.033105    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:20:49.033114    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:20:49.048050    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:49.048061    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:49.052471    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:49.052481    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:49.075727    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:20:49.075738    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:49.086930    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:49.086941    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:49.121878    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:20:49.121891    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:20:49.135361    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:20:49.135372    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:20:49.147201    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:20:49.147215    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:20:51.660812    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:50.117391    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:50.117668    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:50.147381    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:20:50.147516    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:50.166013    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:20:50.166112    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:50.180591    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:20:50.180666    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:50.194967    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:20:50.195047    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:50.205530    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:20:50.205601    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:50.216334    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:20:50.216402    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:50.231073    4028 logs.go:276] 0 containers: []
	W0729 04:20:50.231084    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:50.231143    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:50.241623    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:20:50.241642    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:20:50.241648    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:20:50.256350    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:20:50.256360    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:20:50.277294    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:20:50.277305    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:20:50.300348    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:50.300361    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:50.324141    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:20:50.324152    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:50.336221    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:50.336232    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:50.377496    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:50.377508    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:50.381943    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:50.381954    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:50.416818    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:20:50.416832    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:20:50.431974    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:20:50.431987    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:20:50.443910    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:20:50.443920    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:20:50.458736    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:20:50.458747    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:20:50.483931    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:20:50.483942    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:20:50.502364    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:20:50.502374    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:20:50.530124    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:20:50.530141    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:20:50.559537    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:20:50.559550    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:20:50.571528    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:20:50.571543    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:20:53.085054    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:56.662930    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:56.663084    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:56.676994    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:20:56.677071    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:56.688264    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:20:56.688333    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:56.698760    3891 logs.go:276] 4 containers: [205cacb029f0 ffa497a17609 f6b883d29008 ba79364733a5]
	I0729 04:20:56.698823    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:56.708820    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:20:56.708895    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:58.087189    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:58.087322    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:58.104564    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:20:58.104651    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:58.117504    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:20:58.117581    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:58.128767    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:20:58.128838    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:58.139677    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:20:58.139748    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:58.149933    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:20:58.150004    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:58.160883    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:20:58.160956    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:58.171063    4028 logs.go:276] 0 containers: []
	W0729 04:20:58.171073    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:58.171130    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:58.181346    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:20:58.181363    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:20:58.181368    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:20:58.192574    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:58.192586    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:58.230402    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:20:58.230410    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:20:58.245020    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:20:58.245034    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:20:58.260261    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:20:58.260274    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:20:58.277662    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:20:58.277675    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:20:58.292236    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:20:58.292248    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:20:58.305734    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:20:58.305747    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:20:58.317530    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:20:58.317540    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:20:58.329135    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:20:58.329146    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:20:58.340600    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:58.340610    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:58.362960    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:58.362971    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:58.367165    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:58.367171    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:58.403011    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:20:58.403021    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:20:58.417880    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:20:58.417891    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:20:58.442561    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:20:58.442571    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:20:58.454173    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:20:58.454184    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:56.719734    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:20:56.719803    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:56.730240    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:20:56.730304    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:56.745703    3891 logs.go:276] 0 containers: []
	W0729 04:20:56.745718    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:56.745781    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:56.756680    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:20:56.756696    3891 logs.go:123] Gathering logs for coredns [ffa497a17609] ...
	I0729 04:20:56.756702    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffa497a17609"
	I0729 04:20:56.772185    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:20:56.772196    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:20:56.788171    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:20:56.788181    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:20:56.805679    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:56.805690    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:56.839103    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:20:56.839112    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:20:56.851256    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:20:56.851267    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:56.862843    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:56.862855    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:56.898289    3891 logs.go:123] Gathering logs for coredns [205cacb029f0] ...
	I0729 04:20:56.898299    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205cacb029f0"
	I0729 04:20:56.918358    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:20:56.918369    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:20:56.932846    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:20:56.932859    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:20:56.947876    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:56.947888    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:56.952588    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:20:56.952598    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:20:56.967872    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:20:56.967888    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:20:56.982387    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:20:56.982399    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:20:56.995087    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:56.995096    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:59.522528    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:00.968301    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:04.523617    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:04.523886    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:21:04.553249    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:21:04.553372    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:21:04.569867    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:21:04.569956    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:21:04.583659    3891 logs.go:276] 4 containers: [205cacb029f0 ffa497a17609 f6b883d29008 ba79364733a5]
	I0729 04:21:04.583731    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:21:04.594704    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:21:04.594774    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:21:04.606934    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:21:04.606999    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:21:04.619692    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:21:04.619759    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:21:04.630356    3891 logs.go:276] 0 containers: []
	W0729 04:21:04.630370    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:21:04.630431    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:21:04.641490    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:21:04.641508    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:21:04.641514    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:21:04.658040    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:21:04.658051    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:21:04.673358    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:21:04.673368    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:21:04.684876    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:21:04.684889    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:21:04.701653    3891 logs.go:123] Gathering logs for coredns [205cacb029f0] ...
	I0729 04:21:04.701668    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205cacb029f0"
	I0729 04:21:04.712962    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:21:04.712976    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:21:04.725383    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:21:04.725392    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:21:04.740366    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:21:04.740378    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:21:04.745118    3891 logs.go:123] Gathering logs for coredns [ffa497a17609] ...
	I0729 04:21:04.745125    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffa497a17609"
	I0729 04:21:04.777693    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:21:04.777707    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:21:04.798870    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:21:04.798882    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:21:04.810832    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:21:04.810843    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:21:04.834563    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:21:04.834571    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:21:04.867403    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:21:04.867413    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:21:04.902565    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:21:04.902576    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:21:05.970560    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:05.970720    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:21:05.984710    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:21:05.984783    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:21:05.998479    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:21:05.998547    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:21:06.008837    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:21:06.008904    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:21:06.019066    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:21:06.019139    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:21:06.029823    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:21:06.029889    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:21:06.040516    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:21:06.040589    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:21:06.057010    4028 logs.go:276] 0 containers: []
	W0729 04:21:06.057028    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:21:06.057090    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:21:06.067998    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:21:06.068020    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:21:06.068025    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:21:06.072706    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:21:06.072712    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:21:06.085868    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:21:06.085881    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:21:06.099252    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:21:06.099263    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:21:06.113526    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:21:06.113541    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:21:06.127974    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:21:06.127987    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:21:06.139833    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:21:06.139845    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:21:06.175023    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:21:06.175035    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:21:06.186522    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:21:06.186531    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:21:06.206879    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:21:06.206889    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:21:06.232687    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:21:06.232699    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:21:06.244577    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:21:06.244588    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:21:06.256323    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:21:06.256336    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:21:06.293517    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:21:06.293526    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:21:06.304925    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:21:06.304937    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:21:06.323967    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:21:06.323977    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:21:06.335684    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:21:06.335695    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:21:07.418916    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:08.860491    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:12.421249    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:12.421496    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:21:12.445221    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:21:12.445327    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:21:12.461345    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:21:12.461432    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:21:12.474284    3891 logs.go:276] 4 containers: [205cacb029f0 ffa497a17609 f6b883d29008 ba79364733a5]
	I0729 04:21:12.474371    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:21:12.485066    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:21:12.485137    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:21:12.495467    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:21:12.495532    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:21:12.506453    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:21:12.506521    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:21:12.516962    3891 logs.go:276] 0 containers: []
	W0729 04:21:12.516976    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:21:12.517032    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:21:12.527726    3891 logs.go:276] 1 containers: [50922b856be2]
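[editor's note] Each gathering cycle starts with a discovery pass like the one above: Docker names Kubernetes containers k8s_<component>_..., so a name filter per component lists current and exited instances, and each found ID is then tailed with `docker logs --tail 400`. A sketch of the discovery loop (component list taken from the log; running docker directly via os/exec is an assumption, minikube issues these over SSH):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers reproduces the per-component discovery step:
    // `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }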
	I0729 04:21:12.527742    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:21:12.527748    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:21:12.539379    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:21:12.539389    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:21:12.558881    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:21:12.558890    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:21:12.594351    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:21:12.594362    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:21:12.606625    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:21:12.606635    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:21:12.619449    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:21:12.619459    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:21:12.656372    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:21:12.656401    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:21:12.671210    3891 logs.go:123] Gathering logs for coredns [205cacb029f0] ...
	I0729 04:21:12.671220    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205cacb029f0"
	I0729 04:21:12.682572    3891 logs.go:123] Gathering logs for coredns [ffa497a17609] ...
	I0729 04:21:12.682584    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffa497a17609"
	I0729 04:21:12.694819    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:21:12.694831    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:21:12.712876    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:21:12.712886    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:21:12.736692    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:21:12.736701    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:21:12.741645    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:21:12.741652    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:21:12.753103    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:21:12.753111    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:21:12.764385    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:21:12.764394    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:21:15.281101    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:13.862700    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:13.863023    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:21:13.892677    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:21:13.892813    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:21:13.911781    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:21:13.911883    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:21:13.925943    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:21:13.926023    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:21:13.942639    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:21:13.942719    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:21:13.953911    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:21:13.953984    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:21:13.966126    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:21:13.966194    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:21:13.976140    4028 logs.go:276] 0 containers: []
	W0729 04:21:13.976150    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:21:13.976204    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:21:13.987802    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:21:13.987820    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:21:13.987825    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:21:14.022734    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:21:14.022748    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:21:14.036912    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:21:14.036922    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:21:14.048712    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:21:14.048724    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:21:14.073562    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:21:14.073574    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:21:14.092394    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:21:14.092408    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:21:14.103602    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:21:14.103613    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:21:14.115606    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:21:14.115617    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:21:14.153000    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:21:14.153008    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:21:14.156923    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:21:14.156931    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:21:14.170367    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:21:14.170377    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:21:14.185458    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:21:14.185493    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:21:14.224305    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:21:14.224318    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:21:14.236230    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:21:14.236242    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:21:14.250417    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:21:14.250429    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:21:14.261615    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:21:14.261626    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:21:14.285230    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:21:14.285240    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:21:16.810397    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:20.283592    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:20.284042    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:21:20.316956    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:21:20.317091    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:21:20.336835    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:21:20.336924    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:21:20.351929    3891 logs.go:276] 4 containers: [205cacb029f0 ffa497a17609 f6b883d29008 ba79364733a5]
	I0729 04:21:20.352011    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:21:20.363580    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:21:20.363656    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:21:20.374122    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:21:20.374188    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:21:20.385637    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:21:20.385705    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:21:20.396429    3891 logs.go:276] 0 containers: []
	W0729 04:21:20.396440    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:21:20.396505    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:21:20.406809    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:21:20.406828    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:21:20.406833    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:21:20.427446    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:21:20.427456    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:21:20.445106    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:21:20.445118    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:21:20.469777    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:21:20.469784    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:21:20.490572    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:21:20.490582    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:21:20.505245    3891 logs.go:123] Gathering logs for coredns [ffa497a17609] ...
	I0729 04:21:20.505255    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffa497a17609"
	I0729 04:21:20.518424    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:21:20.518438    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:21:20.530136    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:21:20.530150    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:21:20.571340    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:21:20.571355    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:21:20.583324    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:21:20.583334    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:21:20.587732    3891 logs.go:123] Gathering logs for coredns [205cacb029f0] ...
	I0729 04:21:20.587738    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205cacb029f0"
	I0729 04:21:20.599657    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:21:20.599668    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:21:20.611968    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:21:20.611977    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:21:20.626925    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:21:20.626938    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:21:20.638910    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:21:20.638920    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:21:21.812510    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:21.812604    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:21:21.827738    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:21:21.827811    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:21:21.838305    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:21:21.838374    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:21:21.848684    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:21:21.848752    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:21:21.859193    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:21:21.859270    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:21:21.869791    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:21:21.869863    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:21:21.880036    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:21:21.880101    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:21:21.889731    4028 logs.go:276] 0 containers: []
	W0729 04:21:21.889742    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:21:21.889803    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:21:21.900535    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:21:21.900554    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:21:21.900560    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:21:21.914745    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:21:21.914757    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:21:21.953021    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:21:21.953034    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:21:21.967695    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:21:21.967706    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:21:21.979570    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:21:21.979582    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:21:21.990676    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:21:21.990687    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:21:22.002269    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:21:22.002283    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:21:22.024998    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:21:22.025008    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:21:22.043368    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:21:22.043381    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:21:22.067782    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:21:22.067796    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:21:22.081986    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:21:22.081999    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:21:22.093916    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:21:22.093926    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:21:22.118853    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:21:22.118863    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:21:22.130021    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:21:22.130034    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:21:22.151274    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:21:22.151282    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:21:22.188141    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:21:22.188149    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:21:22.192366    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:21:22.192376    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:21:23.175973    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:24.706003    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:29.708269    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:29.708356    4028 kubeadm.go:597] duration metric: took 4m3.869230542s to restartPrimaryControlPlane
	W0729 04:21:29.708456    4028 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 04:21:29.708493    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 04:21:30.725253    4028 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.016776375s)
	I0729 04:21:30.725317    4028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 04:21:30.730555    4028 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 04:21:30.733574    4028 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 04:21:30.736220    4028 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 04:21:30.736225    4028 kubeadm.go:157] found existing configuration files:
	
	I0729 04:21:30.736245    4028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/admin.conf
	I0729 04:21:30.739036    4028 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 04:21:30.739069    4028 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 04:21:30.742740    4028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/kubelet.conf
	I0729 04:21:30.745895    4028 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 04:21:30.745941    4028 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 04:21:30.748919    4028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/controller-manager.conf
	I0729 04:21:30.751643    4028 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 04:21:30.751669    4028 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 04:21:30.755061    4028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/scheduler.conf
	I0729 04:21:30.757671    4028 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 04:21:30.757696    4028 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
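[editor's note] The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed if it does not reference it. Here every grep exits with status 2 because kubeadm reset already deleted the files, and the unconditional `rm -f` succeeds regardless. A sketch of the check-then-remove logic, assuming a pure-Go file check (file list and endpoint are from the log; minikube itself shells out over SSH):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // cleanStaleConfigs removes kubeconfigs that do not reference the
    // expected control-plane endpoint. A missing file is treated the
    // same as a stale one, matching the log's behavior.
    func cleanStaleConfigs(endpoint string, files []string) {
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                os.Remove(f) // equivalent of `sudo rm -f`; errors ignored
            }
        }
    }

    func main() {
        cleanStaleConfigs("https://control-plane.minikube.internal:50517",
            []string{
                "/etc/kubernetes/admin.conf",
                "/etc/kubernetes/kubelet.conf",
                "/etc/kubernetes/controller-manager.conf",
                "/etc/kubernetes/scheduler.conf",
            })
    }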
	I0729 04:21:30.760529    4028 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 04:21:30.778738    4028 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 04:21:30.778890    4028 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 04:21:30.825849    4028 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 04:21:30.825909    4028 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 04:21:30.825970    4028 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 04:21:30.877160    4028 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 04:21:30.881140    4028 out.go:204]   - Generating certificates and keys ...
	I0729 04:21:30.881173    4028 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 04:21:30.881205    4028 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 04:21:30.881241    4028 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 04:21:30.881278    4028 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 04:21:30.881328    4028 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 04:21:30.881402    4028 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 04:21:30.881431    4028 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 04:21:30.881466    4028 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 04:21:30.881502    4028 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 04:21:30.881540    4028 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 04:21:30.881562    4028 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 04:21:30.881590    4028 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 04:21:30.955454    4028 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 04:21:31.011445    4028 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 04:21:31.154520    4028 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 04:21:31.240640    4028 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 04:21:31.269941    4028 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 04:21:31.270330    4028 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 04:21:31.270353    4028 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 04:21:31.352673    4028 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 04:21:28.178207    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:28.178418    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:21:28.201759    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:21:28.201877    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:21:28.223658    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:21:28.223737    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:21:28.236378    3891 logs.go:276] 4 containers: [205cacb029f0 ffa497a17609 f6b883d29008 ba79364733a5]
	I0729 04:21:28.236450    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:21:28.247102    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:21:28.247163    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:21:28.257154    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:21:28.257212    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:21:28.267638    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:21:28.267708    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:21:28.277861    3891 logs.go:276] 0 containers: []
	W0729 04:21:28.277871    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:21:28.277923    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:21:28.288261    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:21:28.288279    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:21:28.288284    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:21:28.300246    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:21:28.300261    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:21:28.319042    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:21:28.319053    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:21:28.336264    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:21:28.336277    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:21:28.347923    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:21:28.347935    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:21:28.372941    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:21:28.372952    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:21:28.384796    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:21:28.384810    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:21:28.389725    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:21:28.389735    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:21:28.433302    3891 logs.go:123] Gathering logs for coredns [ffa497a17609] ...
	I0729 04:21:28.433315    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffa497a17609"
	I0729 04:21:28.445084    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:21:28.445098    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:21:28.461608    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:21:28.461620    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:21:28.475839    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:21:28.475849    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:21:28.487175    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:21:28.487187    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:21:28.520141    3891 logs.go:123] Gathering logs for coredns [205cacb029f0] ...
	I0729 04:21:28.520149    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205cacb029f0"
	I0729 04:21:28.532710    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:21:28.532718    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:21:31.045821    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:31.355922    4028 out.go:204]   - Booting up control plane ...
	I0729 04:21:31.355967    4028 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 04:21:31.356005    4028 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 04:21:31.356045    4028 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 04:21:31.356095    4028 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 04:21:31.356359    4028 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 04:21:36.047923    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:36.048083    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:21:36.058941    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:21:36.059003    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:21:36.079408    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:21:36.079479    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:21:36.090118    3891 logs.go:276] 4 containers: [205cacb029f0 ffa497a17609 f6b883d29008 ba79364733a5]
	I0729 04:21:36.090187    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:21:36.101208    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:21:36.101278    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:21:36.114617    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:21:36.114683    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:21:36.125278    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:21:36.125338    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:21:36.135392    3891 logs.go:276] 0 containers: []
	W0729 04:21:36.135406    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:21:36.135468    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:21:36.145966    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:21:36.145983    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:21:36.145988    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:21:36.160829    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:21:36.160842    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:21:36.172138    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:21:36.172151    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:21:36.190076    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:21:36.190090    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:21:36.201835    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:21:36.201848    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:21:36.206086    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:21:36.206092    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:21:36.241439    3891 logs.go:123] Gathering logs for coredns [ffa497a17609] ...
	I0729 04:21:36.241449    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffa497a17609"
	I0729 04:21:36.253779    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:21:36.253790    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:21:36.265748    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:21:36.265758    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:21:36.276975    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:21:36.276987    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:21:36.310471    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:21:36.310479    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:21:36.327245    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:21:36.327254    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:21:36.340908    3891 logs.go:123] Gathering logs for coredns [205cacb029f0] ...
	I0729 04:21:36.340920    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205cacb029f0"
	I0729 04:21:36.358170    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:21:36.358183    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:21:36.371234    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:21:36.371245    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:21:36.358787    4028 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.002114 seconds
	I0729 04:21:36.358935    4028 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 04:21:36.363546    4028 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 04:21:36.871144    4028 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 04:21:36.871253    4028 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-338000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 04:21:37.376263    4028 kubeadm.go:310] [bootstrap-token] Using token: zaydr7.hxiuzrvd5ftnnr8w
	I0729 04:21:37.382041    4028 out.go:204]   - Configuring RBAC rules ...
	I0729 04:21:37.382107    4028 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 04:21:37.382166    4028 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 04:21:37.390208    4028 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 04:21:37.391338    4028 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 04:21:37.392277    4028 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 04:21:37.394053    4028 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 04:21:37.397836    4028 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 04:21:37.543039    4028 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 04:21:37.780197    4028 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 04:21:37.780790    4028 kubeadm.go:310] 
	I0729 04:21:37.780825    4028 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 04:21:37.780853    4028 kubeadm.go:310] 
	I0729 04:21:37.780959    4028 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 04:21:37.780968    4028 kubeadm.go:310] 
	I0729 04:21:37.780980    4028 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 04:21:37.781023    4028 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 04:21:37.781100    4028 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 04:21:37.781104    4028 kubeadm.go:310] 
	I0729 04:21:37.781163    4028 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 04:21:37.781169    4028 kubeadm.go:310] 
	I0729 04:21:37.781210    4028 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 04:21:37.781217    4028 kubeadm.go:310] 
	I0729 04:21:37.781288    4028 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 04:21:37.781320    4028 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 04:21:37.781404    4028 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 04:21:37.781407    4028 kubeadm.go:310] 
	I0729 04:21:37.781443    4028 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 04:21:37.781494    4028 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 04:21:37.781501    4028 kubeadm.go:310] 
	I0729 04:21:37.781565    4028 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zaydr7.hxiuzrvd5ftnnr8w \
	I0729 04:21:37.781620    4028 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e5aa2d5aa27d88407c50ef5c55a2dae7e3993515072a6e61b6476ae55fad38d6 \
	I0729 04:21:37.781634    4028 kubeadm.go:310] 	--control-plane 
	I0729 04:21:37.781641    4028 kubeadm.go:310] 
	I0729 04:21:37.781680    4028 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 04:21:37.781683    4028 kubeadm.go:310] 
	I0729 04:21:37.781737    4028 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zaydr7.hxiuzrvd5ftnnr8w \
	I0729 04:21:37.781817    4028 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e5aa2d5aa27d88407c50ef5c55a2dae7e3993515072a6e61b6476ae55fad38d6 
	I0729 04:21:37.781884    4028 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
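[editor's note] The `kubeadm join` commands printed above pin the cluster CA via --discovery-token-ca-cert-hash, which is the SHA-256 of the DER-encoded Subject Public Key Info of the CA certificate. A sketch of deriving that value in Go (the ca.crt path under the log's certificateDir "/var/lib/minikube/certs" is an assumption; the hashing scheme is kubeadm's documented one):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // caCertHash computes the kubeadm join discovery hash:
    // sha256 over the DER-encoded SubjectPublicKeyInfo of the CA cert.
    func caCertHash(path string) (string, error) {
        pemBytes, err := os.ReadFile(path)
        if err != nil {
            return "", err
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            return "", fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        return fmt.Sprintf("sha256:%x", sum), nil
    }

    func main() {
        h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println(h)
    }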
	I0729 04:21:37.781925    4028 cni.go:84] Creating CNI manager for ""
	I0729 04:21:37.781934    4028 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:21:37.785309    4028 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 04:21:37.792311    4028 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 04:21:37.795349    4028 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
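[editor's note] The 496 bytes copied to /etc/cni/net.d/1-k8s.conflist are minikube's bridge CNI configuration; the log does not show the file's contents, so the conflist below is only an illustrative minimal bridge configuration, not the exact file:

    package main

    import "os"

    // bridgeConflist is an illustrative bridge CNI chain (bridge plugin
    // with host-local IPAM plus portmap); the real 1-k8s.conflist may
    // differ in names, subnet, and options.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist",
            []byte(bridgeConflist), 0644); err != nil {
            panic(err)
        }
    }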
	I0729 04:21:37.800345    4028 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 04:21:37.800402    4028 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 04:21:37.800414    4028 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-338000 minikube.k8s.io/updated_at=2024_07_29T04_21_37_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53 minikube.k8s.io/name=stopped-upgrade-338000 minikube.k8s.io/primary=true
	I0729 04:21:37.841935    4028 ops.go:34] apiserver oom_adj: -16
	I0729 04:21:37.841953    4028 kubeadm.go:1113] duration metric: took 41.592917ms to wait for elevateKubeSystemPrivileges
	I0729 04:21:37.841962    4028 kubeadm.go:394] duration metric: took 4m12.016227s to StartCluster
	I0729 04:21:37.841974    4028 settings.go:142] acquiring lock: {Name:mkb57b03ccb64deee52152ed8ac01af4d9e1ee07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:21:37.842057    4028 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:21:37.843148    4028 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/kubeconfig: {Name:mkc1463454d977493e341af62af023d087f8e1b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:21:37.843465    4028 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:21:37.843531    4028 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 04:21:37.843586    4028 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-338000"
	I0729 04:21:37.843598    4028 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-338000"
	W0729 04:21:37.843601    4028 addons.go:243] addon storage-provisioner should already be in state true
	I0729 04:21:37.843611    4028 host.go:66] Checking if "stopped-upgrade-338000" exists ...
	I0729 04:21:37.843609    4028 config.go:182] Loaded profile config "stopped-upgrade-338000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:21:37.843632    4028 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-338000"
	I0729 04:21:37.843674    4028 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-338000"
	I0729 04:21:37.844538    4028 kapi.go:59] client config for stopped-upgrade-338000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/client.key", CAFile:"/Users/jenkins/minikube-integration/19336-945/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1043bc080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
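[editor's note] The kapi.go dump above is a client-go rest.Config built from the profile's client certificate, key, and CA. A sketch of constructing an equivalent clientset from those fields (host and file paths taken from the dump; the node-list call is just a connectivity check, not something minikube does at this point):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Field values mirror the rest.Config dump in the log.
        config := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/client.crt",
                KeyFile:  "/Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/client.key",
                CAFile:   "/Users/jenkins/minikube-integration/19336-945/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            fmt.Println("node list failed:", err)
            return
        }
        fmt.Println("nodes:", len(nodes.Items))
    }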
	I0729 04:21:37.844652    4028 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-338000"
	W0729 04:21:37.844665    4028 addons.go:243] addon default-storageclass should already be in state true
	I0729 04:21:37.844673    4028 host.go:66] Checking if "stopped-upgrade-338000" exists ...
	I0729 04:21:37.847309    4028 out.go:177] * Verifying Kubernetes components...
	I0729 04:21:37.847687    4028 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 04:21:37.851311    4028 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 04:21:37.851318    4028 sshutil.go:53] new ssh client: &{IP:localhost Port:50482 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/stopped-upgrade-338000/id_rsa Username:docker}
	I0729 04:21:37.855230    4028 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:21:37.859358    4028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:21:37.863313    4028 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 04:21:37.863321    4028 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 04:21:37.863328    4028 sshutil.go:53] new ssh client: &{IP:localhost Port:50482 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/stopped-upgrade-338000/id_rsa Username:docker}
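[editor's note] Both addon manifests are pushed over fresh SSH sessions to localhost:50482 (the QEMU-forwarded guest port) as user "docker" with the machine's id_rsa key, per the sshutil.go lines above. A sketch of that connection pattern, assuming golang.org/x/crypto/ssh (minikube's sshutil wraps a similar dial; the library choice here is an assumption):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // dialMachine opens an SSH client like the sshutil.go log lines:
    // key-based auth as user "docker" against a forwarded local port.
    func dialMachine(addr, user, keyPath string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        return ssh.Dial("tcp", addr, &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
        })
    }

    func main() {
        client, err := dialMachine("localhost:50482", "docker",
            "/Users/jenkins/minikube-integration/19336-945/.minikube/machines/stopped-upgrade-338000/id_rsa")
        if err != nil {
            panic(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, _ := session.CombinedOutput("sudo systemctl is-active kubelet")
        fmt.Print(string(out))
    }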
	I0729 04:21:37.946356    4028 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 04:21:37.951507    4028 api_server.go:52] waiting for apiserver process to appear ...
	I0729 04:21:37.951546    4028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:21:37.955746    4028 api_server.go:72] duration metric: took 112.273791ms to wait for apiserver process to appear ...
	I0729 04:21:37.955754    4028 api_server.go:88] waiting for apiserver healthz status ...
	I0729 04:21:37.955760    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:37.968973    4028 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 04:21:38.002077    4028 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 04:21:38.898500    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:42.957743    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:42.957780    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:43.899023    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:43.899170    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:21:43.909727    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:21:43.909802    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:21:43.920628    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:21:43.920698    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:21:43.931242    3891 logs.go:276] 4 containers: [205cacb029f0 ffa497a17609 f6b883d29008 ba79364733a5]
	I0729 04:21:43.931317    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:21:43.942167    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:21:43.942229    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:21:43.952324    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:21:43.952383    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:21:43.962909    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:21:43.962998    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:21:43.973547    3891 logs.go:276] 0 containers: []
	W0729 04:21:43.973558    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:21:43.973619    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:21:43.984384    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:21:43.984401    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:21:43.984406    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:21:44.019946    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:21:44.019954    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:21:44.031762    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:21:44.031776    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:21:44.036306    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:21:44.036315    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:21:44.050551    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:21:44.050563    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:21:44.066736    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:21:44.066747    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:21:44.091247    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:21:44.091254    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:21:44.104122    3891 logs.go:123] Gathering logs for coredns [ffa497a17609] ...
	I0729 04:21:44.104132    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffa497a17609"
	I0729 04:21:44.115864    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:21:44.115874    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:21:44.128138    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:21:44.128148    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:21:44.142584    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:21:44.142594    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:21:44.178815    3891 logs.go:123] Gathering logs for coredns [205cacb029f0] ...
	I0729 04:21:44.178827    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205cacb029f0"
	I0729 04:21:44.191086    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:21:44.191098    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:21:44.203336    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:21:44.203347    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:21:44.220127    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:21:44.220137    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:21:47.957969    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:47.958019    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:46.733823    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:52.958720    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:52.958764    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:51.735886    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:51.735999    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:21:51.752780    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:21:51.752869    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:21:51.766131    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:21:51.766207    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:21:51.776948    3891 logs.go:276] 4 containers: [205cacb029f0 ffa497a17609 f6b883d29008 ba79364733a5]
	I0729 04:21:51.777026    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:21:51.790612    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:21:51.790685    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:21:51.801639    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:21:51.801710    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:21:51.812549    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:21:51.812624    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:21:51.825398    3891 logs.go:276] 0 containers: []
	W0729 04:21:51.825410    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:21:51.825478    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:21:51.836514    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:21:51.836533    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:21:51.836538    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:21:51.849938    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:21:51.849952    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:21:51.862911    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:21:51.862924    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:21:51.876429    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:21:51.876442    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:21:51.913488    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:21:51.913508    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:21:51.929974    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:21:51.929986    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:21:51.941966    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:21:51.941980    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:21:51.962244    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:21:51.962258    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:21:51.986870    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:21:51.986880    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:21:51.991068    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:21:51.991074    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:21:52.002297    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:21:52.002306    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:21:52.017049    3891 logs.go:123] Gathering logs for coredns [ffa497a17609] ...
	I0729 04:21:52.017059    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffa497a17609"
	I0729 04:21:52.029693    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:21:52.029703    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:21:52.064477    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:21:52.064488    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:21:52.078547    3891 logs.go:123] Gathering logs for coredns [205cacb029f0] ...
	I0729 04:21:52.078566    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205cacb029f0"
	I0729 04:21:54.591689    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:57.959211    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:57.959252    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:59.593946    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:59.594362    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:21:59.629282    3891 logs.go:276] 1 containers: [e4fbff702599]
	I0729 04:21:59.629422    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:21:59.650698    3891 logs.go:276] 1 containers: [4588c8968ab3]
	I0729 04:21:59.650796    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:21:59.665955    3891 logs.go:276] 4 containers: [205cacb029f0 ffa497a17609 f6b883d29008 ba79364733a5]
	I0729 04:21:59.666039    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:21:59.678948    3891 logs.go:276] 1 containers: [d9635b4089bd]
	I0729 04:21:59.679017    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:21:59.689799    3891 logs.go:276] 1 containers: [e6ead3bdd67c]
	I0729 04:21:59.689866    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:21:59.707610    3891 logs.go:276] 1 containers: [ea04037e1056]
	I0729 04:21:59.707685    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:21:59.718361    3891 logs.go:276] 0 containers: []
	W0729 04:21:59.718372    3891 logs.go:278] No container was found matching "kindnet"
	I0729 04:21:59.718424    3891 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:21:59.730401    3891 logs.go:276] 1 containers: [50922b856be2]
	I0729 04:21:59.730417    3891 logs.go:123] Gathering logs for coredns [f6b883d29008] ...
	I0729 04:21:59.730423    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b883d29008"
	I0729 04:21:59.742734    3891 logs.go:123] Gathering logs for kube-scheduler [d9635b4089bd] ...
	I0729 04:21:59.742746    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9635b4089bd"
	I0729 04:21:59.759025    3891 logs.go:123] Gathering logs for Docker ...
	I0729 04:21:59.759036    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:21:59.784158    3891 logs.go:123] Gathering logs for dmesg ...
	I0729 04:21:59.784169    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:21:59.789071    3891 logs.go:123] Gathering logs for coredns [205cacb029f0] ...
	I0729 04:21:59.789078    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205cacb029f0"
	I0729 04:21:59.800762    3891 logs.go:123] Gathering logs for coredns [ba79364733a5] ...
	I0729 04:21:59.800777    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba79364733a5"
	I0729 04:21:59.812430    3891 logs.go:123] Gathering logs for kube-controller-manager [ea04037e1056] ...
	I0729 04:21:59.812440    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea04037e1056"
	I0729 04:21:59.830251    3891 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:21:59.830264    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:21:59.866959    3891 logs.go:123] Gathering logs for etcd [4588c8968ab3] ...
	I0729 04:21:59.866972    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4588c8968ab3"
	I0729 04:21:59.885841    3891 logs.go:123] Gathering logs for kube-proxy [e6ead3bdd67c] ...
	I0729 04:21:59.885852    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6ead3bdd67c"
	I0729 04:21:59.898242    3891 logs.go:123] Gathering logs for storage-provisioner [50922b856be2] ...
	I0729 04:21:59.898253    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50922b856be2"
	I0729 04:21:59.909687    3891 logs.go:123] Gathering logs for container status ...
	I0729 04:21:59.909697    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:21:59.921520    3891 logs.go:123] Gathering logs for kube-apiserver [e4fbff702599] ...
	I0729 04:21:59.921530    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fbff702599"
	I0729 04:21:59.936229    3891 logs.go:123] Gathering logs for coredns [ffa497a17609] ...
	I0729 04:21:59.936239    3891 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffa497a17609"
	I0729 04:21:59.948283    3891 logs.go:123] Gathering logs for kubelet ...
	I0729 04:21:59.948294    3891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:22:02.960374    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:22:02.960427    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:22:02.484811    3891 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:22:07.486931    3891 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:22:07.491550    3891 out.go:177] 
	W0729 04:22:07.496419    3891 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0729 04:22:07.496428    3891 out.go:239] * 
	W0729 04:22:07.497133    3891 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:22:07.508442    3891 out.go:177] 
	I0729 04:22:07.961355    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:22:07.961378    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 04:22:08.320609    4028 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 04:22:08.325914    4028 out.go:177] * Enabled addons: storage-provisioner
	I0729 04:22:08.332757    4028 addons.go:510] duration metric: took 30.490250084s for enable addons: enabled=[storage-provisioner]
	I0729 04:22:12.962533    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:22:12.962555    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:22:17.964407    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:22:17.964435    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
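
Editor's note: the two interleaved PIDs above (3891 and 4028) are separate minikube start flows, each polling the apiserver's /healthz roughly every five seconds until a six-minute node-wait deadline expires, which is exactly the GUEST_START failure reported in the box above. Below is a minimal Go sketch of that polling pattern, for illustration only; it is not minikube's actual api_server.go. The URL and the 5s/6m figures are taken from the log, while the TLS skip-verify setting and the plain GET are assumptions made so the sketch is self-contained.

```go
// Minimal sketch (assumption: not minikube's real implementation) of the
// healthz polling loop visible in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, overall time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-attempt client timeout, as in the log
		Transport: &http.Transport{
			// Assumption: skip cert verification so the sketch runs standalone.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported healthy
			}
		}
		time.Sleep(5 * time.Second) // retry cadence seen in the timestamps
	}
	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println("wait for healthy API server:", err)
	}
}
```
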
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-07-29 11:13:19 UTC, ends at Mon 2024-07-29 11:22:23 UTC. --
	Jul 29 11:22:08 running-upgrade-033000 cri-dockerd[2711]: time="2024-07-29T11:22:08Z" level=error msg="ContainerStats resp: {0x40001aa3c0 linux}"
	Jul 29 11:22:08 running-upgrade-033000 cri-dockerd[2711]: time="2024-07-29T11:22:08Z" level=error msg="ContainerStats resp: {0x4000599440 linux}"
	Jul 29 11:22:08 running-upgrade-033000 cri-dockerd[2711]: time="2024-07-29T11:22:08Z" level=error msg="ContainerStats resp: {0x40001aa500 linux}"
	Jul 29 11:22:08 running-upgrade-033000 cri-dockerd[2711]: time="2024-07-29T11:22:08Z" level=error msg="ContainerStats resp: {0x40001ab440 linux}"
	Jul 29 11:22:09 running-upgrade-033000 cri-dockerd[2711]: time="2024-07-29T11:22:09Z" level=error msg="ContainerStats resp: {0x400095aac0 linux}"
	Jul 29 11:22:10 running-upgrade-033000 cri-dockerd[2711]: time="2024-07-29T11:22:10Z" level=error msg="ContainerStats resp: {0x40008eb500 linux}"
	Jul 29 11:22:10 running-upgrade-033000 cri-dockerd[2711]: time="2024-07-29T11:22:10Z" level=error msg="ContainerStats resp: {0x4000598f80 linux}"
	Jul 29 11:22:10 running-upgrade-033000 cri-dockerd[2711]: time="2024-07-29T11:22:10Z" level=error msg="ContainerStats resp: {0x4000599580 linux}"
	Jul 29 11:22:10 running-upgrade-033000 cri-dockerd[2711]: time="2024-07-29T11:22:10Z" level=error msg="ContainerStats resp: {0x400019c740 linux}"
	Jul 29 11:22:10 running-upgrade-033000 cri-dockerd[2711]: time="2024-07-29T11:22:10Z" level=error msg="ContainerStats resp: {0x400019cf80 linux}"
	Jul 29 11:22:10 running-upgrade-033000 cri-dockerd[2711]: time="2024-07-29T11:22:10Z" level=error msg="ContainerStats resp: {0x40008fc100 linux}"
	Jul 29 11:22:10 running-upgrade-033000 cri-dockerd[2711]: time="2024-07-29T11:22:10Z" level=error msg="ContainerStats resp: {0x40001aa5c0 linux}"
	Jul 29 11:22:10 running-upgrade-033000 cri-dockerd[2711]: time="2024-07-29T11:22:10Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 11:22:15 running-upgrade-033000 cri-dockerd[2711]: time="2024-07-29T11:22:15Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 11:22:20 running-upgrade-033000 cri-dockerd[2711]: time="2024-07-29T11:22:20Z" level=error msg="ContainerStats resp: {0x40003a1c00 linux}"
	Jul 29 11:22:20 running-upgrade-033000 cri-dockerd[2711]: time="2024-07-29T11:22:20Z" level=error msg="ContainerStats resp: {0x40008fdf40 linux}"
	Jul 29 11:22:20 running-upgrade-033000 cri-dockerd[2711]: time="2024-07-29T11:22:20Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 11:22:21 running-upgrade-033000 cri-dockerd[2711]: time="2024-07-29T11:22:21Z" level=error msg="ContainerStats resp: {0x400052fb00 linux}"
	Jul 29 11:22:22 running-upgrade-033000 cri-dockerd[2711]: time="2024-07-29T11:22:22Z" level=error msg="ContainerStats resp: {0x400009c540 linux}"
	Jul 29 11:22:22 running-upgrade-033000 cri-dockerd[2711]: time="2024-07-29T11:22:22Z" level=error msg="ContainerStats resp: {0x4000770780 linux}"
	Jul 29 11:22:22 running-upgrade-033000 cri-dockerd[2711]: time="2024-07-29T11:22:22Z" level=error msg="ContainerStats resp: {0x4000770940 linux}"
	Jul 29 11:22:22 running-upgrade-033000 cri-dockerd[2711]: time="2024-07-29T11:22:22Z" level=error msg="ContainerStats resp: {0x4000770c80 linux}"
	Jul 29 11:22:22 running-upgrade-033000 cri-dockerd[2711]: time="2024-07-29T11:22:22Z" level=error msg="ContainerStats resp: {0x4000771040 linux}"
	Jul 29 11:22:22 running-upgrade-033000 cri-dockerd[2711]: time="2024-07-29T11:22:22Z" level=error msg="ContainerStats resp: {0x40007ecac0 linux}"
	Jul 29 11:22:22 running-upgrade-033000 cri-dockerd[2711]: time="2024-07-29T11:22:22Z" level=error msg="ContainerStats resp: {0x40003a1240 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	f962e28bf22b4       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   aa1cbd04c166c
	b7c8417ab8c5d       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   5cbb18927b20e
	205cacb029f0a       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   5cbb18927b20e
	ffa497a176096       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   aa1cbd04c166c
	50922b856be2e       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   c64ceba5e8703
	e6ead3bdd67cc       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   2b205bd018ab6
	d9635b4089bda       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   2a33b36fd75a5
	4588c8968ab35       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   5ae5f1a900be3
	e4fbff7025996       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   13ac4f4908ef9
	ea04037e10568       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   6259a8a0acd2b
	
	
	==> coredns [205cacb029f0] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5644047716590412256.5526010364609795095. HINFO: read udp 10.244.0.2:39994->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5644047716590412256.5526010364609795095. HINFO: read udp 10.244.0.2:33073->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5644047716590412256.5526010364609795095. HINFO: read udp 10.244.0.2:50264->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5644047716590412256.5526010364609795095. HINFO: read udp 10.244.0.2:58677->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5644047716590412256.5526010364609795095. HINFO: read udp 10.244.0.2:57018->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5644047716590412256.5526010364609795095. HINFO: read udp 10.244.0.2:33971->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b7c8417ab8c5] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5099051085479335522.3944149801742773774. HINFO: read udp 10.244.0.2:36720->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5099051085479335522.3944149801742773774. HINFO: read udp 10.244.0.2:48925->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5099051085479335522.3944149801742773774. HINFO: read udp 10.244.0.2:54112->10.0.2.3:53: i/o timeout
	
	
	==> coredns [f962e28bf22b] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6088650122786282267.5596283188691419544. HINFO: read udp 10.244.0.3:57208->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6088650122786282267.5596283188691419544. HINFO: read udp 10.244.0.3:44035->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6088650122786282267.5596283188691419544. HINFO: read udp 10.244.0.3:46497->10.0.2.3:53: i/o timeout
	
	
	==> coredns [ffa497a17609] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2259846175223196895.948060203542875249. HINFO: read udp 10.244.0.3:40943->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2259846175223196895.948060203542875249. HINFO: read udp 10.244.0.3:55591->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2259846175223196895.948060203542875249. HINFO: read udp 10.244.0.3:59247->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2259846175223196895.948060203542875249. HINFO: read udp 10.244.0.3:52827->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2259846175223196895.948060203542875249. HINFO: read udp 10.244.0.3:35268->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2259846175223196895.948060203542875249. HINFO: read udp 10.244.0.3:37125->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2259846175223196895.948060203542875249. HINFO: read udp 10.244.0.3:32927->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2259846175223196895.948060203542875249. HINFO: read udp 10.244.0.3:47436->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2259846175223196895.948060203542875249. HINFO: read udp 10.244.0.3:33784->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2259846175223196895.948060203542875249. HINFO: read udp 10.244.0.3:44378->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
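
Editor's note: all four CoreDNS containers show the same failure mode: their startup HINFO self-test queries to 10.0.2.3:53 (the DNS server QEMU's user-mode networking exposes to the guest) time out, so upstream resolution never works from inside the pods. The following is a minimal Go sketch, not CoreDNS code, that probes the same upstream resolver; the 2-second timeout and the kubernetes.io test name are arbitrary choices for illustration.

```go
// Minimal sketch (assumption: illustrative probe, not CoreDNS code) that
// sends a lookup through the upstream resolver seen failing in the log.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			// Force every query to the upstream from the log (10.0.2.3:53).
			d := net.Dialer{Timeout: 2 * time.Second} // assumption: 2s timeout
			return d.DialContext(ctx, network, "10.0.2.3:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	if _, err := r.LookupHost(ctx, "kubernetes.io"); err != nil {
		fmt.Println("upstream DNS unreachable:", err) // matches the i/o timeouts above
	} else {
		fmt.Println("upstream DNS reachable")
	}
}
```
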
	
	
	==> describe nodes <==
	Name:               running-upgrade-033000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-033000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53
	                    minikube.k8s.io/name=running-upgrade-033000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T04_18_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:18:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-033000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:22:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:18:06 +0000   Mon, 29 Jul 2024 11:18:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:18:06 +0000   Mon, 29 Jul 2024 11:18:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:18:06 +0000   Mon, 29 Jul 2024 11:18:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:18:06 +0000   Mon, 29 Jul 2024 11:18:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-033000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 5550216e84bf4220a1b5ae2c72fabdae
	  System UUID:                5550216e84bf4220a1b5ae2c72fabdae
	  Boot ID:                    7bf73634-6ad4-4278-95cb-31154f090540
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-gvknh                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-pzw4r                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-033000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-033000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-033000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-gw9z9                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-033000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-033000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-033000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-033000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-033000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s   node-controller  Node running-upgrade-033000 event: Registered Node running-upgrade-033000 in Controller
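
Editor's note: the percentages above follow directly from the node's Allocatable figures: 850m of 2000m CPU is 42.5%, printed as 42%; 240Mi (245760Ki) of 2148820Ki memory is about 11.4%, printed as 11%; and the 340Mi limit (348160Ki) is about 16.2%, printed as 16%.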
	
	
	==> dmesg <==
	[  +1.592656] systemd-fstab-generator[875]: Ignoring "noauto" for root device
	[  +0.068024] systemd-fstab-generator[886]: Ignoring "noauto" for root device
	[  +0.065730] systemd-fstab-generator[897]: Ignoring "noauto" for root device
	[  +1.144921] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.064822] systemd-fstab-generator[1047]: Ignoring "noauto" for root device
	[  +0.081143] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
	[  +2.207965] systemd-fstab-generator[1294]: Ignoring "noauto" for root device
	[  +9.673668] systemd-fstab-generator[1910]: Ignoring "noauto" for root device
	[  +2.773475] systemd-fstab-generator[2190]: Ignoring "noauto" for root device
	[  +0.132758] systemd-fstab-generator[2223]: Ignoring "noauto" for root device
	[  +0.084269] systemd-fstab-generator[2234]: Ignoring "noauto" for root device
	[  +0.073279] systemd-fstab-generator[2247]: Ignoring "noauto" for root device
	[  +1.372153] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.135468] systemd-fstab-generator[2668]: Ignoring "noauto" for root device
	[  +0.071644] systemd-fstab-generator[2679]: Ignoring "noauto" for root device
	[  +0.075387] systemd-fstab-generator[2690]: Ignoring "noauto" for root device
	[  +0.072400] systemd-fstab-generator[2704]: Ignoring "noauto" for root device
	[  +2.230980] systemd-fstab-generator[2857]: Ignoring "noauto" for root device
	[  +2.856156] systemd-fstab-generator[3220]: Ignoring "noauto" for root device
	[  +1.450426] systemd-fstab-generator[3564]: Ignoring "noauto" for root device
	[Jul29 11:14] kauditd_printk_skb: 68 callbacks suppressed
	[Jul29 11:17] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.182965] systemd-fstab-generator[11269]: Ignoring "noauto" for root device
	[Jul29 11:18] systemd-fstab-generator[11866]: Ignoring "noauto" for root device
	[  +0.465222] systemd-fstab-generator[11997]: Ignoring "noauto" for root device
	
	
	==> etcd [4588c8968ab3] <==
	{"level":"info","ts":"2024-07-29T11:18:01.996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-07-29T11:18:01.996Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-29T11:18:01.998Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T11:18:01.998Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T11:18:01.998Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T11:18:01.999Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-29T11:18:01.999Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-29T11:18:02.677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T11:18:02.677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T11:18:02.677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-29T11:18:02.677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T11:18:02.677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-29T11:18:02.677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T11:18:02.677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-29T11:18:02.678Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:18:02.678Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:18:02.678Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:18:02.678Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:18:02.678Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-033000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T11:18:02.678Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:18:02.678Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:18:02.679Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-07-29T11:18:02.679Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T11:18:02.679Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T11:18:02.680Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:22:23 up 9 min,  0 users,  load average: 0.10, 0.22, 0.10
	Linux running-upgrade-033000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [e4fbff702599] <==
	I0729 11:18:03.940937       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0729 11:18:03.948418       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0729 11:18:03.948473       1 cache.go:39] Caches are synced for autoregister controller
	I0729 11:18:03.956114       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0729 11:18:03.956481       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 11:18:03.956644       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 11:18:03.963370       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0729 11:18:04.700794       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0729 11:18:04.865660       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0729 11:18:04.872182       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0729 11:18:04.872267       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 11:18:05.018156       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 11:18:05.030820       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 11:18:05.103875       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0729 11:18:05.105820       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0729 11:18:05.106179       1 controller.go:611] quota admission added evaluator for: endpoints
	I0729 11:18:05.107387       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 11:18:05.994644       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0729 11:18:06.526146       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0729 11:18:06.529341       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0729 11:18:06.539148       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0729 11:18:06.588727       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 11:18:19.498343       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0729 11:18:19.647594       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0729 11:18:20.137562       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [ea04037e1056] <==
	I0729 11:18:19.043742       1 shared_informer.go:262] Caches are synced for taint
	I0729 11:18:19.043789       1 shared_informer.go:262] Caches are synced for stateful set
	I0729 11:18:19.043819       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0729 11:18:19.043858       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-033000. Assuming now as a timestamp.
	I0729 11:18:19.043889       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0729 11:18:19.043916       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0729 11:18:19.043952       1 shared_informer.go:262] Caches are synced for deployment
	I0729 11:18:19.044053       1 event.go:294] "Event occurred" object="running-upgrade-033000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-033000 event: Registered Node running-upgrade-033000 in Controller"
	I0729 11:18:19.044820       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0729 11:18:19.046878       1 shared_informer.go:262] Caches are synced for daemon sets
	I0729 11:18:19.051391       1 shared_informer.go:262] Caches are synced for attach detach
	I0729 11:18:19.216497       1 shared_informer.go:262] Caches are synced for namespace
	I0729 11:18:19.217679       1 shared_informer.go:262] Caches are synced for service account
	I0729 11:18:19.223645       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0729 11:18:19.223704       1 shared_informer.go:262] Caches are synced for disruption
	I0729 11:18:19.223756       1 disruption.go:371] Sending events to api server.
	I0729 11:18:19.234255       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 11:18:19.241749       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 11:18:19.499904       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0729 11:18:19.651083       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gw9z9"
	I0729 11:18:19.654992       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 11:18:19.694855       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 11:18:19.694879       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0729 11:18:19.849855       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-gvknh"
	I0729 11:18:19.854173       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-pzw4r"
	
	
	==> kube-proxy [e6ead3bdd67c] <==
	I0729 11:18:20.123817       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0729 11:18:20.123840       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0729 11:18:20.123850       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0729 11:18:20.135390       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0729 11:18:20.135400       1 server_others.go:206] "Using iptables Proxier"
	I0729 11:18:20.135444       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0729 11:18:20.135608       1 server.go:661] "Version info" version="v1.24.1"
	I0729 11:18:20.135622       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:18:20.135910       1 config.go:317] "Starting service config controller"
	I0729 11:18:20.135943       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0729 11:18:20.136029       1 config.go:226] "Starting endpoint slice config controller"
	I0729 11:18:20.136038       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0729 11:18:20.136289       1 config.go:444] "Starting node config controller"
	I0729 11:18:20.136310       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0729 11:18:20.236112       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0729 11:18:20.236143       1 shared_informer.go:262] Caches are synced for service config
	I0729 11:18:20.237376       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [d9635b4089bd] <==
	W0729 11:18:03.900740       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 11:18:03.900753       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 11:18:03.900840       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 11:18:03.900903       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 11:18:03.901003       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 11:18:03.901015       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 11:18:03.901244       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 11:18:03.901253       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 11:18:03.901264       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 11:18:03.901268       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 11:18:03.901270       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 11:18:03.901273       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 11:18:03.901320       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 11:18:03.901328       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 11:18:03.901334       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 11:18:03.901337       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 11:18:04.730191       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 11:18:04.730235       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 11:18:04.745094       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 11:18:04.745126       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 11:18:04.943523       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 11:18:04.943540       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 11:18:04.943724       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 11:18:04.943732       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0729 11:18:05.298644       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
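
Editor's note: the "forbidden" list/watch failures above appear to be the usual scheduler startup race, where informers start before the apiserver has finished installing the default RBAC policy; no further errors are logged after the client-ca cache sync on the last line, so the scheduler itself is not implicated in this test's failure.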
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-07-29 11:13:19 UTC, ends at Mon 2024-07-29 11:22:23 UTC. --
	Jul 29 11:18:08 running-upgrade-033000 kubelet[11872]: E0729 11:18:08.755340   11872 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-033000\" already exists" pod="kube-system/etcd-running-upgrade-033000"
	Jul 29 11:18:19 running-upgrade-033000 kubelet[11872]: I0729 11:18:19.051169   11872 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 11:18:19 running-upgrade-033000 kubelet[11872]: I0729 11:18:19.054791   11872 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 11:18:19 running-upgrade-033000 kubelet[11872]: I0729 11:18:19.054919   11872 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrslj\" (UniqueName: \"kubernetes.io/projected/567da1c7-68f1-44f5-8ab9-3a62eaf1c2ab-kube-api-access-wrslj\") pod \"storage-provisioner\" (UID: \"567da1c7-68f1-44f5-8ab9-3a62eaf1c2ab\") " pod="kube-system/storage-provisioner"
	Jul 29 11:18:19 running-upgrade-033000 kubelet[11872]: I0729 11:18:19.054932   11872 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/567da1c7-68f1-44f5-8ab9-3a62eaf1c2ab-tmp\") pod \"storage-provisioner\" (UID: \"567da1c7-68f1-44f5-8ab9-3a62eaf1c2ab\") " pod="kube-system/storage-provisioner"
	Jul 29 11:18:19 running-upgrade-033000 kubelet[11872]: I0729 11:18:19.055162   11872 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 11:18:19 running-upgrade-033000 kubelet[11872]: E0729 11:18:19.158013   11872 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 29 11:18:19 running-upgrade-033000 kubelet[11872]: E0729 11:18:19.158032   11872 projected.go:192] Error preparing data for projected volume kube-api-access-wrslj for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 29 11:18:19 running-upgrade-033000 kubelet[11872]: E0729 11:18:19.158068   11872 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/567da1c7-68f1-44f5-8ab9-3a62eaf1c2ab-kube-api-access-wrslj podName:567da1c7-68f1-44f5-8ab9-3a62eaf1c2ab nodeName:}" failed. No retries permitted until 2024-07-29 11:18:19.658055053 +0000 UTC m=+13.141600574 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wrslj" (UniqueName: "kubernetes.io/projected/567da1c7-68f1-44f5-8ab9-3a62eaf1c2ab-kube-api-access-wrslj") pod "storage-provisioner" (UID: "567da1c7-68f1-44f5-8ab9-3a62eaf1c2ab") : configmap "kube-root-ca.crt" not found
	Jul 29 11:18:19 running-upgrade-033000 kubelet[11872]: I0729 11:18:19.653552   11872 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 11:18:19 running-upgrade-033000 kubelet[11872]: E0729 11:18:19.660080   11872 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 29 11:18:19 running-upgrade-033000 kubelet[11872]: E0729 11:18:19.660147   11872 projected.go:192] Error preparing data for projected volume kube-api-access-wrslj for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 29 11:18:19 running-upgrade-033000 kubelet[11872]: E0729 11:18:19.660187   11872 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/567da1c7-68f1-44f5-8ab9-3a62eaf1c2ab-kube-api-access-wrslj podName:567da1c7-68f1-44f5-8ab9-3a62eaf1c2ab nodeName:}" failed. No retries permitted until 2024-07-29 11:18:20.660177827 +0000 UTC m=+14.143723348 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrslj" (UniqueName: "kubernetes.io/projected/567da1c7-68f1-44f5-8ab9-3a62eaf1c2ab-kube-api-access-wrslj") pod "storage-provisioner" (UID: "567da1c7-68f1-44f5-8ab9-3a62eaf1c2ab") : configmap "kube-root-ca.crt" not found
	Jul 29 11:18:19 running-upgrade-033000 kubelet[11872]: I0729 11:18:19.760597   11872 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c20e7aa1-31a4-43b2-8e35-b0172874d4af-xtables-lock\") pod \"kube-proxy-gw9z9\" (UID: \"c20e7aa1-31a4-43b2-8e35-b0172874d4af\") " pod="kube-system/kube-proxy-gw9z9"
	Jul 29 11:18:19 running-upgrade-033000 kubelet[11872]: I0729 11:18:19.760633   11872 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46f9w\" (UniqueName: \"kubernetes.io/projected/c20e7aa1-31a4-43b2-8e35-b0172874d4af-kube-api-access-46f9w\") pod \"kube-proxy-gw9z9\" (UID: \"c20e7aa1-31a4-43b2-8e35-b0172874d4af\") " pod="kube-system/kube-proxy-gw9z9"
	Jul 29 11:18:19 running-upgrade-033000 kubelet[11872]: I0729 11:18:19.760646   11872 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c20e7aa1-31a4-43b2-8e35-b0172874d4af-lib-modules\") pod \"kube-proxy-gw9z9\" (UID: \"c20e7aa1-31a4-43b2-8e35-b0172874d4af\") " pod="kube-system/kube-proxy-gw9z9"
	Jul 29 11:18:19 running-upgrade-033000 kubelet[11872]: I0729 11:18:19.760666   11872 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c20e7aa1-31a4-43b2-8e35-b0172874d4af-kube-proxy\") pod \"kube-proxy-gw9z9\" (UID: \"c20e7aa1-31a4-43b2-8e35-b0172874d4af\") " pod="kube-system/kube-proxy-gw9z9"
	Jul 29 11:18:19 running-upgrade-033000 kubelet[11872]: I0729 11:18:19.852699   11872 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 11:18:19 running-upgrade-033000 kubelet[11872]: I0729 11:18:19.858579   11872 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 11:18:19 running-upgrade-033000 kubelet[11872]: I0729 11:18:19.962235   11872 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw5mv\" (UniqueName: \"kubernetes.io/projected/9e34c48d-dcbd-451b-a7a2-2e6b23295e66-kube-api-access-vw5mv\") pod \"coredns-6d4b75cb6d-gvknh\" (UID: \"9e34c48d-dcbd-451b-a7a2-2e6b23295e66\") " pod="kube-system/coredns-6d4b75cb6d-gvknh"
	Jul 29 11:18:19 running-upgrade-033000 kubelet[11872]: I0729 11:18:19.962259   11872 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e34c48d-dcbd-451b-a7a2-2e6b23295e66-config-volume\") pod \"coredns-6d4b75cb6d-gvknh\" (UID: \"9e34c48d-dcbd-451b-a7a2-2e6b23295e66\") " pod="kube-system/coredns-6d4b75cb6d-gvknh"
	Jul 29 11:18:19 running-upgrade-033000 kubelet[11872]: I0729 11:18:19.962270   11872 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/644ab7a9-1c3a-41ea-9b5c-fda3b08eba13-config-volume\") pod \"coredns-6d4b75cb6d-pzw4r\" (UID: \"644ab7a9-1c3a-41ea-9b5c-fda3b08eba13\") " pod="kube-system/coredns-6d4b75cb6d-pzw4r"
	Jul 29 11:18:19 running-upgrade-033000 kubelet[11872]: I0729 11:18:19.962280   11872 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp59p\" (UniqueName: \"kubernetes.io/projected/644ab7a9-1c3a-41ea-9b5c-fda3b08eba13-kube-api-access-jp59p\") pod \"coredns-6d4b75cb6d-pzw4r\" (UID: \"644ab7a9-1c3a-41ea-9b5c-fda3b08eba13\") " pod="kube-system/coredns-6d4b75cb6d-pzw4r"
	Jul 29 11:22:08 running-upgrade-033000 kubelet[11872]: I0729 11:22:08.975954   11872 scope.go:110] "RemoveContainer" containerID="f6b883d2900812b39655b81e042bc5465727fab7b6be07de2e70454cc76a98f0"
	Jul 29 11:22:08 running-upgrade-033000 kubelet[11872]: I0729 11:22:08.995487   11872 scope.go:110] "RemoveContainer" containerID="ba79364733a527c367735927e974f68f485b01adac31844621dc8cc66e8e8f44"
	
	
	==> storage-provisioner [50922b856be2] <==
	I0729 11:18:21.072203       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 11:18:21.078165       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 11:18:21.078617       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 11:18:21.082783       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 11:18:21.082839       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-033000_457a5634-de87-463c-bf3f-50d0230a98d6!
	I0729 11:18:21.083682       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"de0a0554-586b-4a10-9604-88152f3ae488", APIVersion:"v1", ResourceVersion:"371", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-033000_457a5634-de87-463c-bf3f-50d0230a98d6 became leader
	I0729 11:18:21.183835       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-033000_457a5634-de87-463c-bf3f-50d0230a98d6!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-033000 -n running-upgrade-033000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-033000 -n running-upgrade-033000: exit status 2 (15.618835458s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-033000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-033000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-033000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-033000: (1.277260167s)
--- FAIL: TestRunningBinaryUpgrade (585.08s)
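The kubelet lines near the top of this log are the usual startup race rather than a separate failure: projected service-account volumes (kube-api-access-*) cannot mount until kube-controller-manager publishes the kube-root-ca.crt ConfigMap into the namespace, and kubelet retries with a growing backoff (durationBeforeRetry 500ms, then 1s, as logged) until it appears. A minimal spot-check, assuming the profile's kubeconfig context is still reachable (the context name below is taken from the log above; the apiserver here was already reporting Stopped by the time the helper ran):

	# hedged spot-check: has the root-CA ConfigMap been published yet?
	kubectl --context running-upgrade-033000 -n kube-system get configmap kube-root-ca.crt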

TestKubernetesUpgrade (18.35s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-325000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-325000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.799214333s)

-- stdout --
	* [kubernetes-upgrade-325000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-325000" primary control-plane node in "kubernetes-upgrade-325000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-325000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:15:55.900939    3958 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:15:55.901051    3958 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:15:55.901056    3958 out.go:304] Setting ErrFile to fd 2...
	I0729 04:15:55.901058    3958 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:15:55.901180    3958 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:15:55.902256    3958 out.go:298] Setting JSON to false
	I0729 04:15:55.918536    3958 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2718,"bootTime":1722249037,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:15:55.918605    3958 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:15:55.923542    3958 out.go:177] * [kubernetes-upgrade-325000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:15:55.933531    3958 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:15:55.933564    3958 notify.go:220] Checking for updates...
	I0729 04:15:55.940461    3958 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:15:55.941786    3958 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:15:55.944484    3958 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:15:55.947511    3958 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:15:55.950530    3958 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:15:55.953929    3958 config.go:182] Loaded profile config "multinode-369000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:15:55.953997    3958 config.go:182] Loaded profile config "running-upgrade-033000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:15:55.954051    3958 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:15:55.958483    3958 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:15:55.965497    3958 start.go:297] selected driver: qemu2
	I0729 04:15:55.965503    3958 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:15:55.965510    3958 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:15:55.967892    3958 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:15:55.970503    3958 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:15:55.973568    3958 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 04:15:55.973580    3958 cni.go:84] Creating CNI manager for ""
	I0729 04:15:55.973587    3958 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 04:15:55.973612    3958 start.go:340] cluster config:
	{Name:kubernetes-upgrade-325000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-325000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:15:55.977466    3958 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:15:55.985518    3958 out.go:177] * Starting "kubernetes-upgrade-325000" primary control-plane node in "kubernetes-upgrade-325000" cluster
	I0729 04:15:55.989458    3958 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 04:15:55.989474    3958 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 04:15:55.989486    3958 cache.go:56] Caching tarball of preloaded images
	I0729 04:15:55.989543    3958 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:15:55.989549    3958 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 04:15:55.989604    3958 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/kubernetes-upgrade-325000/config.json ...
	I0729 04:15:55.989615    3958 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/kubernetes-upgrade-325000/config.json: {Name:mk570933f4364be7c5b286ce9037f4e2ca26494d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:15:55.989942    3958 start.go:360] acquireMachinesLock for kubernetes-upgrade-325000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:15:55.989979    3958 start.go:364] duration metric: took 29.792µs to acquireMachinesLock for "kubernetes-upgrade-325000"
	I0729 04:15:55.989992    3958 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-325000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-325000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:15:55.990034    3958 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:15:55.997460    3958 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:15:56.012599    3958 start.go:159] libmachine.API.Create for "kubernetes-upgrade-325000" (driver="qemu2")
	I0729 04:15:56.012623    3958 client.go:168] LocalClient.Create starting
	I0729 04:15:56.012703    3958 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:15:56.012732    3958 main.go:141] libmachine: Decoding PEM data...
	I0729 04:15:56.012739    3958 main.go:141] libmachine: Parsing certificate...
	I0729 04:15:56.012776    3958 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:15:56.012797    3958 main.go:141] libmachine: Decoding PEM data...
	I0729 04:15:56.012806    3958 main.go:141] libmachine: Parsing certificate...
	I0729 04:15:56.013244    3958 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:15:56.164928    3958 main.go:141] libmachine: Creating SSH key...
	I0729 04:15:56.259099    3958 main.go:141] libmachine: Creating Disk image...
	I0729 04:15:56.259105    3958 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:15:56.259271    3958 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubernetes-upgrade-325000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubernetes-upgrade-325000/disk.qcow2
	I0729 04:15:56.268406    3958 main.go:141] libmachine: STDOUT: 
	I0729 04:15:56.268427    3958 main.go:141] libmachine: STDERR: 
	I0729 04:15:56.268494    3958 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubernetes-upgrade-325000/disk.qcow2 +20000M
	I0729 04:15:56.276573    3958 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:15:56.276589    3958 main.go:141] libmachine: STDERR: 
	I0729 04:15:56.276611    3958 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubernetes-upgrade-325000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubernetes-upgrade-325000/disk.qcow2
	I0729 04:15:56.276618    3958 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:15:56.276628    3958 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:15:56.276657    3958 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubernetes-upgrade-325000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubernetes-upgrade-325000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubernetes-upgrade-325000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:65:1d:49:40:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubernetes-upgrade-325000/disk.qcow2
	I0729 04:15:56.278312    3958 main.go:141] libmachine: STDOUT: 
	I0729 04:15:56.278328    3958 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:15:56.278348    3958 client.go:171] duration metric: took 265.728875ms to LocalClient.Create
	I0729 04:15:58.280493    3958 start.go:128] duration metric: took 2.290505792s to createHost
	I0729 04:15:58.280599    3958 start.go:83] releasing machines lock for "kubernetes-upgrade-325000", held for 2.290683041s
	W0729 04:15:58.280657    3958 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:15:58.290958    3958 out.go:177] * Deleting "kubernetes-upgrade-325000" in qemu2 ...
	W0729 04:15:58.322959    3958 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:15:58.322998    3958 start.go:729] Will try again in 5 seconds ...
	I0729 04:16:03.325145    3958 start.go:360] acquireMachinesLock for kubernetes-upgrade-325000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:16:03.325821    3958 start.go:364] duration metric: took 539.875µs to acquireMachinesLock for "kubernetes-upgrade-325000"
	I0729 04:16:03.325992    3958 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-325000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-325000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:16:03.326288    3958 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:16:03.333920    3958 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:16:03.383213    3958 start.go:159] libmachine.API.Create for "kubernetes-upgrade-325000" (driver="qemu2")
	I0729 04:16:03.383267    3958 client.go:168] LocalClient.Create starting
	I0729 04:16:03.383389    3958 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:16:03.383460    3958 main.go:141] libmachine: Decoding PEM data...
	I0729 04:16:03.383477    3958 main.go:141] libmachine: Parsing certificate...
	I0729 04:16:03.383541    3958 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:16:03.383593    3958 main.go:141] libmachine: Decoding PEM data...
	I0729 04:16:03.383608    3958 main.go:141] libmachine: Parsing certificate...
	I0729 04:16:03.384370    3958 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:16:03.545378    3958 main.go:141] libmachine: Creating SSH key...
	I0729 04:16:03.612283    3958 main.go:141] libmachine: Creating Disk image...
	I0729 04:16:03.612293    3958 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:16:03.612496    3958 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubernetes-upgrade-325000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubernetes-upgrade-325000/disk.qcow2
	I0729 04:16:03.621889    3958 main.go:141] libmachine: STDOUT: 
	I0729 04:16:03.621905    3958 main.go:141] libmachine: STDERR: 
	I0729 04:16:03.621966    3958 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubernetes-upgrade-325000/disk.qcow2 +20000M
	I0729 04:16:03.630029    3958 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:16:03.630046    3958 main.go:141] libmachine: STDERR: 
	I0729 04:16:03.630069    3958 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubernetes-upgrade-325000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubernetes-upgrade-325000/disk.qcow2
	I0729 04:16:03.630076    3958 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:16:03.630086    3958 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:16:03.630119    3958 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubernetes-upgrade-325000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubernetes-upgrade-325000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubernetes-upgrade-325000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:18:44:e9:a4:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubernetes-upgrade-325000/disk.qcow2
	I0729 04:16:03.631831    3958 main.go:141] libmachine: STDOUT: 
	I0729 04:16:03.631845    3958 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:16:03.631858    3958 client.go:171] duration metric: took 248.594709ms to LocalClient.Create
	I0729 04:16:05.633971    3958 start.go:128] duration metric: took 2.307720584s to createHost
	I0729 04:16:05.634076    3958 start.go:83] releasing machines lock for "kubernetes-upgrade-325000", held for 2.308293459s
	W0729 04:16:05.634355    3958 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-325000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-325000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:16:05.642645    3958 out.go:177] 
	W0729 04:16:05.647684    3958 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:16:05.647702    3958 out.go:239] * 
	* 
	W0729 04:16:05.649247    3958 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:16:05.663455    3958 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-325000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-325000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-325000: (3.166581167s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-325000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-325000 status --format={{.Host}}: exit status 7 (32.541667ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-325000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-325000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.182215542s)

-- stdout --
	* [kubernetes-upgrade-325000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-325000" primary control-plane node in "kubernetes-upgrade-325000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-325000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-325000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:16:08.903926    3993 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:16:08.904075    3993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:16:08.904080    3993 out.go:304] Setting ErrFile to fd 2...
	I0729 04:16:08.904082    3993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:16:08.904207    3993 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:16:08.905525    3993 out.go:298] Setting JSON to false
	I0729 04:16:08.923812    3993 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2731,"bootTime":1722249037,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:16:08.923892    3993 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:16:08.928566    3993 out.go:177] * [kubernetes-upgrade-325000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:16:08.935623    3993 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:16:08.935721    3993 notify.go:220] Checking for updates...
	I0729 04:16:08.941573    3993 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:16:08.944562    3993 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:16:08.947553    3993 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:16:08.950512    3993 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:16:08.953571    3993 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:16:08.956814    3993 config.go:182] Loaded profile config "kubernetes-upgrade-325000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 04:16:08.957075    3993 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:16:08.959528    3993 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:16:08.966527    3993 start.go:297] selected driver: qemu2
	I0729 04:16:08.966533    3993 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-325000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-325000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:16:08.966581    3993 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:16:08.968967    3993 cni.go:84] Creating CNI manager for ""
	I0729 04:16:08.968983    3993 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:16:08.969000    3993 start.go:340] cluster config:
	{Name:kubernetes-upgrade-325000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-325000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:16:08.972724    3993 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:16:08.979548    3993 out.go:177] * Starting "kubernetes-upgrade-325000" primary control-plane node in "kubernetes-upgrade-325000" cluster
	I0729 04:16:08.983556    3993 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 04:16:08.983571    3993 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 04:16:08.983579    3993 cache.go:56] Caching tarball of preloaded images
	I0729 04:16:08.983635    3993 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:16:08.983640    3993 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 04:16:08.983687    3993 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/kubernetes-upgrade-325000/config.json ...
	I0729 04:16:08.984044    3993 start.go:360] acquireMachinesLock for kubernetes-upgrade-325000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:16:08.984072    3993 start.go:364] duration metric: took 22.292µs to acquireMachinesLock for "kubernetes-upgrade-325000"
	I0729 04:16:08.984082    3993 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:16:08.984087    3993 fix.go:54] fixHost starting: 
	I0729 04:16:08.984194    3993 fix.go:112] recreateIfNeeded on kubernetes-upgrade-325000: state=Stopped err=<nil>
	W0729 04:16:08.984201    3993 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:16:08.992551    3993 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-325000" ...
	I0729 04:16:08.996559    3993 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:16:08.996594    3993 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubernetes-upgrade-325000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubernetes-upgrade-325000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubernetes-upgrade-325000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:18:44:e9:a4:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubernetes-upgrade-325000/disk.qcow2
	I0729 04:16:08.998711    3993 main.go:141] libmachine: STDOUT: 
	I0729 04:16:08.998736    3993 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:16:08.998763    3993 fix.go:56] duration metric: took 14.675875ms for fixHost
	I0729 04:16:08.998766    3993 start.go:83] releasing machines lock for "kubernetes-upgrade-325000", held for 14.690166ms
	W0729 04:16:08.998772    3993 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:16:08.998801    3993 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:16:08.998805    3993 start.go:729] Will try again in 5 seconds ...
	I0729 04:16:14.000898    3993 start.go:360] acquireMachinesLock for kubernetes-upgrade-325000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:16:14.001350    3993 start.go:364] duration metric: took 365.417µs to acquireMachinesLock for "kubernetes-upgrade-325000"
	I0729 04:16:14.001468    3993 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:16:14.001483    3993 fix.go:54] fixHost starting: 
	I0729 04:16:14.002017    3993 fix.go:112] recreateIfNeeded on kubernetes-upgrade-325000: state=Stopped err=<nil>
	W0729 04:16:14.002032    3993 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:16:14.010316    3993 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-325000" ...
	I0729 04:16:14.013338    3993 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:16:14.013486    3993 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubernetes-upgrade-325000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubernetes-upgrade-325000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubernetes-upgrade-325000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:18:44:e9:a4:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubernetes-upgrade-325000/disk.qcow2
	I0729 04:16:14.021135    3993 main.go:141] libmachine: STDOUT: 
	I0729 04:16:14.021197    3993 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:16:14.021252    3993 fix.go:56] duration metric: took 19.769958ms for fixHost
	I0729 04:16:14.021264    3993 start.go:83] releasing machines lock for "kubernetes-upgrade-325000", held for 19.9ms
	W0729 04:16:14.021401    3993 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-325000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-325000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:16:14.029376    3993 out.go:177] 
	W0729 04:16:14.032538    3993 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:16:14.032557    3993 out.go:239] * 
	* 
	W0729 04:16:14.033808    3993 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:16:14.043342    3993 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-325000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-325000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-325000 version --output=json: exit status 1 (55.327041ms)

** stderr ** 
	error: context "kubernetes-upgrade-325000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-29 04:16:14.112011 -0700 PDT m=+2513.359726334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-325000 -n kubernetes-upgrade-325000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-325000 -n kubernetes-upgrade-325000: exit status 7 (32.31825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-325000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-325000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-325000
--- FAIL: TestKubernetesUpgrade (18.35s)
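Every qemu2 start in this test fails before the VM boots, the same way as in TestRunningBinaryUpgrade above: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client is refused on the daemon socket at /var/run/socket_vmnet. A host-side sanity check before re-running, sketched under the assumption that socket_vmnet was installed as a Homebrew service (the service name is an assumption; adjust if it was built from source):

	# is the socket_vmnet daemon alive and its unix socket present?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# if not, restarting the service typically clears "Connection refused" (Homebrew install assumed)
	sudo brew services restart socket_vmnet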

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.91s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19336
- KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3833633571/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.91s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.29s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19336
- KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1170720489/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.29s)
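Exit status 56 in both subtests corresponds to the DRV_UNSUPPORTED_OS message printed above: hyperkit only runs on Intel Macs, so these upgrade checks cannot pass on an arm64 agent and are environmental rather than flaky. A trivial guard for a wrapper script (hypothetical, not part of the suite):

	# skip hyperkit-based tests on Apple Silicon hosts
	[ "$(uname -m)" = "arm64" ] && echo "arm64 host: hyperkit unsupported, skipping"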

TestStoppedBinaryUpgrade/Upgrade (564.03s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3630460287 start -p stopped-upgrade-338000 --memory=2200 --vm-driver=qemu2 
E0729 04:16:23.111779    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/addons-867000/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3630460287 start -p stopped-upgrade-338000 --memory=2200 --vm-driver=qemu2 : (40.397879375s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3630460287 -p stopped-upgrade-338000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3630460287 -p stopped-upgrade-338000 stop: (3.09329125s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-338000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0729 04:18:20.034261    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/addons-867000/client.crt: no such file or directory
E0729 04:20:14.728951    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-338000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m40.427397167s)

-- stdout --
	* [stopped-upgrade-338000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-338000" primary control-plane node in "stopped-upgrade-338000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-338000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0729 04:16:58.640090    4028 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:16:58.640245    4028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:16:58.640249    4028 out.go:304] Setting ErrFile to fd 2...
	I0729 04:16:58.640251    4028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:16:58.640419    4028 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:16:58.641492    4028 out.go:298] Setting JSON to false
	I0729 04:16:58.659453    4028 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2781,"bootTime":1722249037,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:16:58.659534    4028 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:16:58.675134    4028 out.go:177] * [stopped-upgrade-338000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:16:58.683123    4028 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:16:58.683145    4028 notify.go:220] Checking for updates...
	I0729 04:16:58.691080    4028 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:16:58.694093    4028 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:16:58.697135    4028 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:16:58.700077    4028 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:16:58.703082    4028 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:16:58.706425    4028 config.go:182] Loaded profile config "stopped-upgrade-338000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:16:58.710036    4028 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 04:16:58.713078    4028 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:16:58.717111    4028 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:16:58.724095    4028 start.go:297] selected driver: qemu2
	I0729 04:16:58.724102    4028 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-338000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50517 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-338000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 04:16:58.724170    4028 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:16:58.727046    4028 cni.go:84] Creating CNI manager for ""
	I0729 04:16:58.727070    4028 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:16:58.727095    4028 start.go:340] cluster config:
	{Name:stopped-upgrade-338000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50517 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-338000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 04:16:58.727153    4028 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:16:58.735097    4028 out.go:177] * Starting "stopped-upgrade-338000" primary control-plane node in "stopped-upgrade-338000" cluster
	I0729 04:16:58.739101    4028 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 04:16:58.739119    4028 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0729 04:16:58.739127    4028 cache.go:56] Caching tarball of preloaded images
	I0729 04:16:58.739197    4028 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:16:58.739208    4028 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0729 04:16:58.739262    4028 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/config.json ...
	I0729 04:16:58.739682    4028 start.go:360] acquireMachinesLock for stopped-upgrade-338000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:16:58.739712    4028 start.go:364] duration metric: took 23.208µs to acquireMachinesLock for "stopped-upgrade-338000"
	I0729 04:16:58.739722    4028 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:16:58.739727    4028 fix.go:54] fixHost starting: 
	I0729 04:16:58.739836    4028 fix.go:112] recreateIfNeeded on stopped-upgrade-338000: state=Stopped err=<nil>
	W0729 04:16:58.739844    4028 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:16:58.748073    4028 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-338000" ...
	I0729 04:16:58.752102    4028 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:16:58.752171    4028 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/stopped-upgrade-338000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/stopped-upgrade-338000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/stopped-upgrade-338000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50482-:22,hostfwd=tcp::50483-:2376,hostname=stopped-upgrade-338000 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/stopped-upgrade-338000/disk.qcow2
	I0729 04:16:58.798497    4028 main.go:141] libmachine: STDOUT: 
	I0729 04:16:58.798530    4028 main.go:141] libmachine: STDERR: 
	I0729 04:16:58.798535    4028 main.go:141] libmachine: Waiting for VM to start (ssh -p 50482 docker@127.0.0.1)...
	I0729 04:17:17.803140    4028 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/config.json ...
	I0729 04:17:17.803873    4028 machine.go:94] provisionDockerMachine start ...
	I0729 04:17:17.804034    4028 main.go:141] libmachine: Using SSH client type: native
	I0729 04:17:17.804534    4028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103026a10] 0x103029270 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0729 04:17:17.804548    4028 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 04:17:17.886669    4028 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 04:17:17.886700    4028 buildroot.go:166] provisioning hostname "stopped-upgrade-338000"
	I0729 04:17:17.886793    4028 main.go:141] libmachine: Using SSH client type: native
	I0729 04:17:17.886990    4028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103026a10] 0x103029270 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0729 04:17:17.887000    4028 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-338000 && echo "stopped-upgrade-338000" | sudo tee /etc/hostname
	I0729 04:17:17.948593    4028 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-338000
	
	I0729 04:17:17.948656    4028 main.go:141] libmachine: Using SSH client type: native
	I0729 04:17:17.948798    4028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103026a10] 0x103029270 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0729 04:17:17.948807    4028 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-338000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-338000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-338000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 04:17:18.008233    4028 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 04:17:18.008245    4028 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19336-945/.minikube CaCertPath:/Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19336-945/.minikube}
	I0729 04:17:18.008258    4028 buildroot.go:174] setting up certificates
	I0729 04:17:18.008263    4028 provision.go:84] configureAuth start
	I0729 04:17:18.008270    4028 provision.go:143] copyHostCerts
	I0729 04:17:18.008341    4028 exec_runner.go:144] found /Users/jenkins/minikube-integration/19336-945/.minikube/ca.pem, removing ...
	I0729 04:17:18.008346    4028 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19336-945/.minikube/ca.pem
	I0729 04:17:18.008470    4028 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19336-945/.minikube/ca.pem (1078 bytes)
	I0729 04:17:18.008665    4028 exec_runner.go:144] found /Users/jenkins/minikube-integration/19336-945/.minikube/cert.pem, removing ...
	I0729 04:17:18.008669    4028 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19336-945/.minikube/cert.pem
	I0729 04:17:18.008724    4028 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19336-945/.minikube/cert.pem (1123 bytes)
	I0729 04:17:18.008832    4028 exec_runner.go:144] found /Users/jenkins/minikube-integration/19336-945/.minikube/key.pem, removing ...
	I0729 04:17:18.008835    4028 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19336-945/.minikube/key.pem
	I0729 04:17:18.008887    4028 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19336-945/.minikube/key.pem (1679 bytes)
	I0729 04:17:18.008978    4028 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19336-945/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-338000 san=[127.0.0.1 localhost minikube stopped-upgrade-338000]
	I0729 04:17:18.257534    4028 provision.go:177] copyRemoteCerts
	I0729 04:17:18.257589    4028 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 04:17:18.257599    4028 sshutil.go:53] new ssh client: &{IP:localhost Port:50482 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/stopped-upgrade-338000/id_rsa Username:docker}
	I0729 04:17:18.290213    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 04:17:18.297305    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 04:17:18.304071    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 04:17:18.310744    4028 provision.go:87] duration metric: took 302.485833ms to configureAuth
	I0729 04:17:18.310752    4028 buildroot.go:189] setting minikube options for container-runtime
	I0729 04:17:18.310857    4028 config.go:182] Loaded profile config "stopped-upgrade-338000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:17:18.310895    4028 main.go:141] libmachine: Using SSH client type: native
	I0729 04:17:18.310992    4028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103026a10] 0x103029270 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0729 04:17:18.310999    4028 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 04:17:18.366171    4028 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 04:17:18.366180    4028 buildroot.go:70] root file system type: tmpfs
	I0729 04:17:18.366237    4028 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 04:17:18.366302    4028 main.go:141] libmachine: Using SSH client type: native
	I0729 04:17:18.366424    4028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103026a10] 0x103029270 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0729 04:17:18.366456    4028 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 04:17:18.426474    4028 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 04:17:18.426523    4028 main.go:141] libmachine: Using SSH client type: native
	I0729 04:17:18.426628    4028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103026a10] 0x103029270 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0729 04:17:18.426638    4028 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 04:17:18.758160    4028 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0729 04:17:18.758173    4028 machine.go:97] duration metric: took 954.321458ms to provisionDockerMachine
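
The unit update just above is deliberately idempotent: the new unit is written to docker.service.new, diffed against the live file, and only on a difference moved into place before the daemon is reloaded and restarted. A sketch of the same write-if-changed pattern in Go (file names here are illustrative):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// writeIfChanged writes content via a .new sibling and reports whether the
// file actually changed, so the caller can skip an unnecessary restart.
func writeIfChanged(path string, content []byte) (bool, error) {
	if old, err := os.ReadFile(path); err == nil && bytes.Equal(old, content) {
		return false, nil // identical: nothing to do, no restart needed
	}
	if err := os.WriteFile(path+".new", content, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(path+".new", path)
}

func main() {
	changed, err := writeIfChanged("/tmp/docker.service.example", []byte("[Unit]\nDescription=example\n"))
	fmt.Println(changed, err)
}
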
	I0729 04:17:18.758181    4028 start.go:293] postStartSetup for "stopped-upgrade-338000" (driver="qemu2")
	I0729 04:17:18.758187    4028 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 04:17:18.758246    4028 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 04:17:18.758254    4028 sshutil.go:53] new ssh client: &{IP:localhost Port:50482 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/stopped-upgrade-338000/id_rsa Username:docker}
	I0729 04:17:18.789263    4028 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 04:17:18.790553    4028 info.go:137] Remote host: Buildroot 2021.02.12
	I0729 04:17:18.790560    4028 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19336-945/.minikube/addons for local assets ...
	I0729 04:17:18.790663    4028 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19336-945/.minikube/files for local assets ...
	I0729 04:17:18.790785    4028 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19336-945/.minikube/files/etc/ssl/certs/13972.pem -> 13972.pem in /etc/ssl/certs
	I0729 04:17:18.790910    4028 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 04:17:18.793866    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/files/etc/ssl/certs/13972.pem --> /etc/ssl/certs/13972.pem (1708 bytes)
	I0729 04:17:18.800919    4028 start.go:296] duration metric: took 42.734ms for postStartSetup
	I0729 04:17:18.800933    4028 fix.go:56] duration metric: took 20.061856458s for fixHost
	I0729 04:17:18.800968    4028 main.go:141] libmachine: Using SSH client type: native
	I0729 04:17:18.801073    4028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103026a10] 0x103029270 <nil>  [] 0s} localhost 50482 <nil> <nil>}
	I0729 04:17:18.801078    4028 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 04:17:18.856109    4028 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722251839.313678712
	
	I0729 04:17:18.856118    4028 fix.go:216] guest clock: 1722251839.313678712
	I0729 04:17:18.856122    4028 fix.go:229] Guest: 2024-07-29 04:17:19.313678712 -0700 PDT Remote: 2024-07-29 04:17:18.800935 -0700 PDT m=+20.184814709 (delta=512.743712ms)
	I0729 04:17:18.856134    4028 fix.go:200] guest clock delta is within tolerance: 512.743712ms
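
The clock check above works from the guest's `date +%s.%N` output: the guest instant minus the host instant gives the delta that must stay inside tolerance before the start proceeds. The same arithmetic with the two timestamps from the log (the 2s tolerance constant is an assumption for illustration):

package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(0, int64(1722251839.313678712*1e9)) // guest `date +%s.%N` reading
	remote := time.Unix(0, int64(1722251838.800935*1e9))   // host-side reading from the log
	delta := guest.Sub(remote)                             // ≈ 512.74ms, as logged
	const tolerance = 2 * time.Second                      // illustrative threshold
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta > -tolerance && delta < tolerance)
}
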
	I0729 04:17:18.856138    4028 start.go:83] releasing machines lock for "stopped-upgrade-338000", held for 20.117072291s
	I0729 04:17:18.856200    4028 ssh_runner.go:195] Run: cat /version.json
	I0729 04:17:18.856210    4028 sshutil.go:53] new ssh client: &{IP:localhost Port:50482 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/stopped-upgrade-338000/id_rsa Username:docker}
	I0729 04:17:18.856200    4028 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 04:17:18.856256    4028 sshutil.go:53] new ssh client: &{IP:localhost Port:50482 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/stopped-upgrade-338000/id_rsa Username:docker}
	W0729 04:17:18.856780    4028 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50482: connect: connection refused
	I0729 04:17:18.856801    4028 retry.go:31] will retry after 211.079939ms: dial tcp [::1]:50482: connect: connection refused
	W0729 04:17:18.884000    4028 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0729 04:17:18.884049    4028 ssh_runner.go:195] Run: systemctl --version
	I0729 04:17:18.885706    4028 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 04:17:18.887349    4028 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 04:17:18.887379    4028 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0729 04:17:18.890454    4028 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0729 04:17:18.895162    4028 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 04:17:18.895171    4028 start.go:495] detecting cgroup driver to use...
	I0729 04:17:18.895246    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 04:17:18.901570    4028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0729 04:17:18.904897    4028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 04:17:18.907573    4028 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 04:17:18.907602    4028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 04:17:18.910532    4028 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 04:17:18.913734    4028 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 04:17:18.916762    4028 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 04:17:18.919428    4028 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 04:17:18.922628    4028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 04:17:18.925930    4028 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 04:17:18.929089    4028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 04:17:18.932043    4028 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 04:17:18.934667    4028 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 04:17:18.937678    4028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:17:19.002866    4028 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0729 04:17:19.013307    4028 start.go:495] detecting cgroup driver to use...
	I0729 04:17:19.013378    4028 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 04:17:19.018835    4028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 04:17:19.022990    4028 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 04:17:19.029441    4028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 04:17:19.034407    4028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 04:17:19.038784    4028 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0729 04:17:19.080765    4028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 04:17:19.085163    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 04:17:19.090808    4028 ssh_runner.go:195] Run: which cri-dockerd
	I0729 04:17:19.092306    4028 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 04:17:19.095461    4028 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 04:17:19.102516    4028 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 04:17:19.169844    4028 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 04:17:19.234381    4028 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 04:17:19.234442    4028 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 04:17:19.239844    4028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:17:19.306449    4028 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 04:17:20.417141    4028 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.110693542s)
	I0729 04:17:20.417249    4028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 04:17:20.423439    4028 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0729 04:17:20.431127    4028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 04:17:20.437181    4028 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 04:17:20.500319    4028 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 04:17:20.571492    4028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:17:20.633609    4028 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 04:17:20.639565    4028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 04:17:20.644223    4028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:17:20.710606    4028 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 04:17:20.750588    4028 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 04:17:20.750672    4028 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 04:17:20.753673    4028 start.go:563] Will wait 60s for crictl version
	I0729 04:17:20.753724    4028 ssh_runner.go:195] Run: which crictl
	I0729 04:17:20.755267    4028 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 04:17:20.769391    4028 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0729 04:17:20.769459    4028 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 04:17:20.785101    4028 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 04:17:20.805029    4028 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0729 04:17:20.805092    4028 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0729 04:17:20.806509    4028 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 04:17:20.810474    4028 kubeadm.go:883] updating cluster {Name:stopped-upgrade-338000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50517 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-338000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0729 04:17:20.810516    4028 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 04:17:20.810556    4028 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 04:17:20.821027    4028 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 04:17:20.821036    4028 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 04:17:20.821085    4028 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 04:17:20.824149    4028 ssh_runner.go:195] Run: which lz4
	I0729 04:17:20.825443    4028 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 04:17:20.826873    4028 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 04:17:20.826883    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0729 04:17:21.750752    4028 docker.go:649] duration metric: took 925.365833ms to copy over tarball
	I0729 04:17:21.750813    4028 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 04:17:22.913166    4028 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.162377916s)
	I0729 04:17:22.913180    4028 ssh_runner.go:146] rm: /preloaded.tar.lz4
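
The preload handling above is a stat-then-copy-then-extract sequence: the existence check at /preloaded.tar.lz4 fails, so the cached tarball is copied into the guest, unpacked over /var with xattrs preserved, and removed. Schematically, in Go (runCmd is an illustrative stand-in for minikube's ssh_runner, which runs these inside the guest; running them locally is only an illustration):

package main

import (
	"log"
	"os/exec"
)

// runCmd is a stand-in for ssh_runner: the real flow executes these in the VM.
func runCmd(name string, args ...string) error {
	return exec.Command(name, args...).Run()
}

func main() {
	tarball := "/preloaded.tar.lz4"
	cached := ".minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4"
	// 1. Existence check; a non-zero exit means the tarball must be copied over.
	if err := runCmd("stat", "-c", "%s %y", tarball); err != nil {
		// 2. scp the cached tarball into the guest (reduced to a local cp here).
		if err := runCmd("cp", cached, tarball); err != nil {
			log.Fatal(err)
		}
	}
	// 3. Unpack over /var, preserving security xattrs, then delete the tarball.
	if err := runCmd("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability", "-I", "lz4", "-C", "/var", "-xf", tarball); err != nil {
		log.Fatal(err)
	}
	_ = runCmd("sudo", "rm", tarball)
}
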
	I0729 04:17:22.928614    4028 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 04:17:22.931398    4028 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0729 04:17:22.936423    4028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:17:22.992881    4028 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 04:17:24.568753    4028 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.575903791s)
	I0729 04:17:24.568839    4028 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 04:17:24.580765    4028 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 04:17:24.580773    4028 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 04:17:24.580780    4028 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 04:17:24.585430    4028 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:17:24.586909    4028 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:17:24.588387    4028 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:17:24.588473    4028 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:17:24.590213    4028 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 04:17:24.590703    4028 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:17:24.591092    4028 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:17:24.592481    4028 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:17:24.592775    4028 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:17:24.593441    4028 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:17:24.593448    4028 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 04:17:24.594000    4028 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:17:24.594600    4028 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:17:24.595830    4028 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:17:24.595885    4028 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:17:24.596505    4028 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:17:25.022437    4028 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:17:25.024525    4028 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 04:17:25.033964    4028 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0729 04:17:25.033996    4028 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:17:25.034050    4028 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:17:25.037657    4028 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:17:25.039759    4028 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0729 04:17:25.039777    4028 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0729 04:17:25.039811    4028 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0729 04:17:25.049789    4028 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0729 04:17:25.054473    4028 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:17:25.057746    4028 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 04:17:25.057823    4028 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0729 04:17:25.057842    4028 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:17:25.057860    4028 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0729 04:17:25.057872    4028 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:17:25.073444    4028 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0729 04:17:25.073466    4028 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:17:25.073483    4028 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0729 04:17:25.073517    4028 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:17:25.073552    4028 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0729 04:17:25.073560    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0729 04:17:25.077904    4028 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	W0729 04:17:25.084744    4028 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 04:17:25.084868    4028 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:17:25.090808    4028 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0729 04:17:25.090839    4028 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0729 04:17:25.090855    4028 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:17:25.090897    4028 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:17:25.095646    4028 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 04:17:25.095659    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0729 04:17:25.109893    4028 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0729 04:17:25.109989    4028 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0729 04:17:25.110006    4028 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:17:25.110053    4028 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:17:25.132096    4028 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 04:17:25.139255    4028 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
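
Each "needs transfer" cycle above has the same shape: inspect the image ID in the runtime, and when it does not match the expected hash, remove the stale image, copy the cached tarball in, and stream it through docker load. A sketch of one cycle using the pause:3.7 values from the log (schematic, not minikube code):

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	image := "registry.k8s.io/pause:3.7"
	want := "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" // expected hash from the log
	out, _ := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if !strings.Contains(string(out), want) {
		// Image missing or at the wrong hash: drop it and load the cached copy.
		exec.Command("docker", "rmi", image).Run()
		// In the real flow the tarball is first scp'd to /var/lib/minikube/images/pause_3.7.
		if err := exec.Command("/bin/bash", "-c", "sudo cat /var/lib/minikube/images/pause_3.7 | docker load").Run(); err != nil {
			log.Fatal(err)
		}
	}
}
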
	I0729 04:17:25.139286    4028 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 04:17:25.139394    4028 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 04:17:25.151174    4028 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0729 04:17:25.151194    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0729 04:17:25.151199    4028 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0729 04:17:25.151215    4028 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:17:25.151255    4028 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0729 04:17:25.183630    4028 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 04:17:25.203159    4028 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 04:17:25.203171    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0729 04:17:25.224075    4028 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 04:17:25.224195    4028 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:17:25.245080    4028 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 04:17:25.245107    4028 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0729 04:17:25.245125    4028 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:17:25.245177    4028 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:17:25.258469    4028 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 04:17:25.258584    4028 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 04:17:25.259929    4028 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0729 04:17:25.259940    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0729 04:17:25.288308    4028 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 04:17:25.288322    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0729 04:17:25.523371    4028 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 04:17:25.523414    4028 cache_images.go:92] duration metric: took 942.656958ms to LoadCachedImages
	W0729 04:17:25.523460    4028 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0729 04:17:25.523469    4028 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0729 04:17:25.523524    4028 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-338000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-338000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 04:17:25.523590    4028 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 04:17:25.537320    4028 cni.go:84] Creating CNI manager for ""
	I0729 04:17:25.537331    4028 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:17:25.537335    4028 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 04:17:25.537344    4028 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-338000 NodeName:stopped-upgrade-338000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 04:17:25.537412    4028 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-338000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
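
The kubeadm config above is rendered from the option struct logged at kubeadm.go:181. A minimal sketch of that kind of struct-to-YAML rendering with text/template (the struct and the template excerpt are illustrative, not minikube's actual ones):

package main

import (
	"os"
	"text/template"
)

// kubeadmOpts holds a few of the fields from the options struct in the log.
type kubeadmOpts struct {
	ClusterName       string
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
}

const excerpt = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := kubeadmOpts{
		ClusterName:       "mk",
		KubernetesVersion: "v1.24.1",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(excerpt))
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
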
	
	I0729 04:17:25.537465    4028 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0729 04:17:25.540927    4028 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 04:17:25.540957    4028 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 04:17:25.543708    4028 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0729 04:17:25.548469    4028 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 04:17:25.553282    4028 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0729 04:17:25.558713    4028 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0729 04:17:25.560110    4028 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
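The /etc/hosts one-liner above is idempotent: it filters out any existing control-plane.minikube.internal entry, appends the fresh 10.0.2.15 mapping, writes to a temp file, and copies the result back. A rough Go equivalent, as a sketch (hypothetical helper, not minikube's API; rewriting /etc/hosts needs root):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry drops any stale line for host, then appends ip<TAB>host.
func ensureHostEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// equivalent of: grep -v $'\t<host>$'
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostEntry("/etc/hosts", "10.0.2.15", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```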
	I0729 04:17:25.563419    4028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:17:25.629515    4028 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 04:17:25.636338    4028 certs.go:68] Setting up /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000 for IP: 10.0.2.15
	I0729 04:17:25.636348    4028 certs.go:194] generating shared ca certs ...
	I0729 04:17:25.636358    4028 certs.go:226] acquiring lock for ca certs: {Name:mk0965f831896eb9b1f5dee9ac66a2ece4b593d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:17:25.636533    4028 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19336-945/.minikube/ca.key
	I0729 04:17:25.636596    4028 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19336-945/.minikube/proxy-client-ca.key
	I0729 04:17:25.636605    4028 certs.go:256] generating profile certs ...
	I0729 04:17:25.636695    4028 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/client.key
	I0729 04:17:25.636716    4028 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/apiserver.key.a7dbec32
	I0729 04:17:25.636726    4028 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/apiserver.crt.a7dbec32 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0729 04:17:25.707181    4028 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/apiserver.crt.a7dbec32 ...
	I0729 04:17:25.707192    4028 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/apiserver.crt.a7dbec32: {Name:mk4f7c46013d8982827f9dd2e084af8713094999 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:17:25.707490    4028 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/apiserver.key.a7dbec32 ...
	I0729 04:17:25.707495    4028 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/apiserver.key.a7dbec32: {Name:mkd146571ce421c6254955e0f574c7716ca821fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:17:25.707640    4028 certs.go:381] copying /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/apiserver.crt.a7dbec32 -> /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/apiserver.crt
	I0729 04:17:25.707795    4028 certs.go:385] copying /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/apiserver.key.a7dbec32 -> /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/apiserver.key
	I0729 04:17:25.707952    4028 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/proxy-client.key
	I0729 04:17:25.708085    4028 certs.go:484] found cert: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/1397.pem (1338 bytes)
	W0729 04:17:25.708113    4028 certs.go:480] ignoring /Users/jenkins/minikube-integration/19336-945/.minikube/certs/1397_empty.pem, impossibly tiny 0 bytes
	I0729 04:17:25.708118    4028 certs.go:484] found cert: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 04:17:25.708141    4028 certs.go:484] found cert: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem (1078 bytes)
	I0729 04:17:25.708159    4028 certs.go:484] found cert: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem (1123 bytes)
	I0729 04:17:25.708178    4028 certs.go:484] found cert: /Users/jenkins/minikube-integration/19336-945/.minikube/certs/key.pem (1679 bytes)
	I0729 04:17:25.708218    4028 certs.go:484] found cert: /Users/jenkins/minikube-integration/19336-945/.minikube/files/etc/ssl/certs/13972.pem (1708 bytes)
	I0729 04:17:25.708570    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 04:17:25.715265    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 04:17:25.722080    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 04:17:25.729405    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 04:17:25.736813    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 04:17:25.743666    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 04:17:25.750527    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 04:17:25.758092    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 04:17:25.765751    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/certs/1397.pem --> /usr/share/ca-certificates/1397.pem (1338 bytes)
	I0729 04:17:25.772820    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/files/etc/ssl/certs/13972.pem --> /usr/share/ca-certificates/13972.pem (1708 bytes)
	I0729 04:17:25.779560    4028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19336-945/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 04:17:25.786165    4028 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 04:17:25.791595    4028 ssh_runner.go:195] Run: openssl version
	I0729 04:17:25.793372    4028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1397.pem && ln -fs /usr/share/ca-certificates/1397.pem /etc/ssl/certs/1397.pem"
	I0729 04:17:25.796330    4028 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1397.pem
	I0729 04:17:25.797666    4028 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:42 /usr/share/ca-certificates/1397.pem
	I0729 04:17:25.797686    4028 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1397.pem
	I0729 04:17:25.799502    4028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1397.pem /etc/ssl/certs/51391683.0"
	I0729 04:17:25.802397    4028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13972.pem && ln -fs /usr/share/ca-certificates/13972.pem /etc/ssl/certs/13972.pem"
	I0729 04:17:25.805804    4028 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13972.pem
	I0729 04:17:25.807292    4028 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:42 /usr/share/ca-certificates/13972.pem
	I0729 04:17:25.807309    4028 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13972.pem
	I0729 04:17:25.809066    4028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13972.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 04:17:25.811870    4028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 04:17:25.814668    4028 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 04:17:25.816165    4028 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:35 /usr/share/ca-certificates/minikubeCA.pem
	I0729 04:17:25.816185    4028 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 04:17:25.817945    4028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
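The symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: `openssl x509 -hash -noout` prints the hash, and a <hash>.0 link in /etc/ssl/certs makes the cert discoverable by OpenSSL's directory lookup. A sketch of the same two steps in Go (paths illustrative; writing /etc/ssl/certs needs root):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// Ask openssl for the subject hash, e.g. "b5213941" in the log above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// ln -fs equivalent: remove any existing link, then relink.
	_ = os.Remove(link)
	if err := os.Symlink(pem, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```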
	I0729 04:17:25.821368    4028 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 04:17:25.822794    4028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 04:17:25.824840    4028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 04:17:25.826628    4028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 04:17:25.828545    4028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 04:17:25.830297    4028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 04:17:25.832073    4028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
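Each `-checkend 86400` call above asks whether the certificate expires within the next 86400 seconds (24 hours). The same check with Go's standard library, purely as a sketch (the path is taken from the log):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// openssl x509 -checkend 86400: fail if NotAfter is within 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 86400s")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}
```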
	I0729 04:17:25.833899    4028 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-338000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50517 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-338000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 04:17:25.833964    4028 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 04:17:25.844062    4028 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 04:17:25.847011    4028 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 04:17:25.847017    4028 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 04:17:25.847038    4028 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 04:17:25.850774    4028 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 04:17:25.851088    4028 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-338000" does not appear in /Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:17:25.851183    4028 kubeconfig.go:62] /Users/jenkins/minikube-integration/19336-945/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-338000" cluster setting kubeconfig missing "stopped-upgrade-338000" context setting]
	I0729 04:17:25.851371    4028 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/kubeconfig: {Name:mkc1463454d977493e341af62af023d087f8e1b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:17:25.851845    4028 kapi.go:59] client config for stopped-upgrade-338000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/client.key", CAFile:"/Users/jenkins/minikube-integration/19336-945/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1043bc080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 04:17:25.852164    4028 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 04:17:25.854930    4028 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-338000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
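Drift detection here leans on diff's exit codes: 0 means the deployed kubeadm.yaml matches the freshly rendered one, 1 means they differ and the cluster is reconfigured from the new file. A sketch of that check (hypothetical helper name, not minikube's code):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u old new` and maps exit codes to outcomes:
// 0 = identical, 1 = drifted, anything else = error (e.g. missing file).
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if drifted {
		fmt.Println("kubeadm config drift detected:\n" + diff)
	}
}
```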
	I0729 04:17:25.854935    4028 kubeadm.go:1160] stopping kube-system containers ...
	I0729 04:17:25.854975    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 04:17:25.865473    4028 docker.go:483] Stopping containers: [cae11772d89d 4830b62c6b98 486a2b7332b3 8f2228fa6055 68f8e4539bd1 64317cceabde b56e17165644 285d228d3e90]
	I0729 04:17:25.865532    4028 ssh_runner.go:195] Run: docker stop cae11772d89d 4830b62c6b98 486a2b7332b3 8f2228fa6055 68f8e4539bd1 64317cceabde b56e17165644 285d228d3e90
	I0729 04:17:25.876013    4028 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 04:17:25.881515    4028 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 04:17:25.884626    4028 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 04:17:25.884632    4028 kubeadm.go:157] found existing configuration files:
	
	I0729 04:17:25.884659    4028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/admin.conf
	I0729 04:17:25.887336    4028 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 04:17:25.887360    4028 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 04:17:25.890194    4028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/kubelet.conf
	I0729 04:17:25.893293    4028 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 04:17:25.893318    4028 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 04:17:25.896034    4028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/controller-manager.conf
	I0729 04:17:25.898528    4028 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 04:17:25.898546    4028 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 04:17:25.901684    4028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/scheduler.conf
	I0729 04:17:25.904375    4028 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 04:17:25.904398    4028 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
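The grep/rm pairs above implement a stale-config sweep: each /etc/kubernetes/*.conf is kept only if it already references https://control-plane.minikube.internal:50517 and removed otherwise, so kubeadm regenerates it (grep's exit status 2 above just means the file did not exist). A compact sketch of the same sweep:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50517"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		// File missing or endpoint absent: remove so kubeadm rewrites it.
		if err != nil || !strings.Contains(string(data), endpoint) {
			_ = os.Remove(f)
			fmt.Println("removed stale config:", f)
		}
	}
}
```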
	I0729 04:17:25.906831    4028 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 04:17:25.909967    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:17:25.933080    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:17:26.455118    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:17:26.574816    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:17:26.604288    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
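The restart path replays kubeadm init phase by phase (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same config, with the pinned v1.24.1 binaries first on PATH. A sketch of the sequence (illustrative only; real error handling and sudo plumbing are richer):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		// Put the pinned binaries directory first, as the log commands do.
		cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.24.1:"+os.Getenv("PATH"))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
			return
		}
	}
}
```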
	I0729 04:17:26.630476    4028 api_server.go:52] waiting for apiserver process to appear ...
	I0729 04:17:26.630555    4028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:17:27.132371    4028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:17:27.632581    4028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:17:27.638009    4028 api_server.go:72] duration metric: took 1.007566584s to wait for apiserver process to appear ...
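The wait loop above polls pgrep roughly every 500ms (ticks at 26.63s, 27.13s, 27.63s) and reports a duration metric once the apiserver process appears. A minimal sketch of that pattern (pattern string taken from the log; the one-minute timeout is an assumption):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep every 500ms until the pattern matches a
// running process or the timeout elapses. pgrep exits 0 only on a match.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %q", pattern)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	start := time.Now()
	if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("took %s to wait for apiserver process\n", time.Since(start))
}
```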
	I0729 04:17:27.638021    4028 api_server.go:88] waiting for apiserver healthz status ...
	I0729 04:17:27.638031    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:32.638076    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:32.638107    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:37.639875    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:37.639946    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:42.640417    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:42.640444    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:47.640719    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:47.640749    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:52.641213    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:52.641267    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:17:57.641802    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:17:57.641828    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:02.642597    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:02.642620    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:07.643555    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:07.643589    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:12.644809    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:12.644836    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:17.646432    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:17.646467    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:22.648460    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:22.648498    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:27.650586    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
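Every healthz probe in this stretch fails after exactly five seconds, i.e. the HTTP client's timeout fires rather than the server answering, after which minikube falls back to gathering component logs. A sketch of one such probe (TLS verification is skipped here for brevity; minikube verifies against its own CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the 5s spacing of "stopped" lines
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	resp, err := client.Get("https://10.0.2.15:8443/healthz")
	if err != nil {
		// e.g. "context deadline exceeded (Client.Timeout exceeded ...)"
		fmt.Println("stopped:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}
```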
	I0729 04:18:27.650690    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:18:27.662665    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:18:27.662745    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:18:27.678224    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:18:27.678301    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:18:27.688916    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:18:27.688988    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:18:27.699179    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:18:27.699254    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:18:27.709416    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:18:27.709493    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:18:27.719511    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:18:27.719598    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:18:27.730085    4028 logs.go:276] 0 containers: []
	W0729 04:18:27.730096    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:18:27.730154    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:18:27.740310    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:18:27.740326    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:18:27.740332    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:18:27.753761    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:18:27.753772    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:18:27.768251    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:18:27.768261    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:18:27.782790    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:18:27.782803    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:18:27.794978    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:18:27.794989    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:18:27.834987    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:18:27.834997    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:18:27.928126    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:18:27.928153    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:18:27.942950    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:18:27.942962    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:18:27.958420    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:18:27.958438    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:18:27.973221    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:18:27.973230    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:18:27.997332    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:18:27.997342    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:18:28.008649    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:18:28.008672    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:18:28.023817    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:18:28.023827    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:18:28.035608    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:18:28.035618    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:18:28.051220    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:18:28.051231    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:18:28.055353    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:18:28.055359    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:18:28.081805    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:18:28.081817    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:18:30.601310    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:35.603506    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:35.603662    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:18:35.620336    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:18:35.620414    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:18:35.633165    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:18:35.633240    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:18:35.643923    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:18:35.643989    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:18:35.655738    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:18:35.655819    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:18:35.666104    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:18:35.666172    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:18:35.680276    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:18:35.680341    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:18:35.690801    4028 logs.go:276] 0 containers: []
	W0729 04:18:35.690813    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:18:35.690869    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:18:35.700887    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:18:35.700903    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:18:35.700908    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:18:35.705439    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:18:35.705448    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:18:35.719226    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:18:35.719237    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:18:35.731375    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:18:35.731385    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:18:35.743440    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:18:35.743456    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:18:35.782400    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:18:35.782411    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:18:35.796966    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:18:35.796979    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:18:35.811891    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:18:35.811901    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:18:35.823534    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:18:35.823548    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:18:35.834985    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:18:35.834999    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:18:35.860634    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:18:35.860647    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:18:35.874529    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:18:35.874540    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:18:35.888669    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:18:35.888678    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:18:35.900021    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:18:35.900032    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:18:35.937545    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:18:35.937560    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:18:35.950013    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:18:35.950026    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:18:35.968417    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:18:35.968431    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:18:38.495719    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:43.497087    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:43.497225    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:18:43.514917    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:18:43.515005    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:18:43.526119    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:18:43.526195    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:18:43.540144    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:18:43.540213    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:18:43.550537    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:18:43.550606    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:18:43.561293    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:18:43.561351    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:18:43.575137    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:18:43.575217    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:18:43.585051    4028 logs.go:276] 0 containers: []
	W0729 04:18:43.585064    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:18:43.585115    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:18:43.595751    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:18:43.595769    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:18:43.595774    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:18:43.612605    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:18:43.612618    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:18:43.624067    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:18:43.624080    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:18:43.635354    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:18:43.635370    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:18:43.652959    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:18:43.652971    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:18:43.668643    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:18:43.668657    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:18:43.681456    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:18:43.681467    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:18:43.712712    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:18:43.712723    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:18:43.727283    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:18:43.727299    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:18:43.742489    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:18:43.742501    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:18:43.756194    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:18:43.756205    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:18:43.780120    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:18:43.780127    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:18:43.817370    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:18:43.817377    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:18:43.821518    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:18:43.821526    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:18:43.857268    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:18:43.857284    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:18:43.871565    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:18:43.871575    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:18:43.883691    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:18:43.883702    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:18:46.397652    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:51.400252    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:51.400652    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:18:51.436265    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:18:51.436379    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:18:51.454849    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:18:51.454924    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:18:51.468361    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:18:51.468438    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:18:51.479515    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:18:51.479588    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:18:51.490213    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:18:51.490287    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:18:51.500782    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:18:51.500856    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:18:51.510608    4028 logs.go:276] 0 containers: []
	W0729 04:18:51.510619    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:18:51.510680    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:18:51.524019    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:18:51.524037    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:18:51.524042    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:18:51.535635    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:18:51.535646    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:18:51.553008    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:18:51.553021    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:18:51.564330    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:18:51.564340    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:18:51.606769    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:18:51.606791    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:18:51.611796    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:18:51.611805    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:18:51.649887    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:18:51.649899    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:18:51.662058    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:18:51.662071    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:18:51.673948    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:18:51.673960    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:18:51.698332    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:18:51.698342    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:18:51.711824    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:18:51.711837    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:18:51.723181    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:18:51.723192    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:18:51.736442    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:18:51.736455    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:18:51.750374    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:18:51.750386    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:18:51.765266    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:18:51.765277    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:18:51.782110    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:18:51.782120    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:18:51.795150    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:18:51.795160    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:18:54.321627    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:18:59.322985    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:18:59.323354    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:18:59.354026    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:18:59.354158    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:18:59.372752    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:18:59.372836    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:18:59.386532    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:18:59.386604    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:18:59.398241    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:18:59.398307    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:18:59.409113    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:18:59.409178    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:18:59.420019    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:18:59.420089    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:18:59.430992    4028 logs.go:276] 0 containers: []
	W0729 04:18:59.431003    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:18:59.431060    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:18:59.441793    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:18:59.441808    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:18:59.441814    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:18:59.459856    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:18:59.459871    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:18:59.484757    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:18:59.484767    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:18:59.496695    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:18:59.496709    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:18:59.514001    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:18:59.514016    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:18:59.539628    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:18:59.539636    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:18:59.543876    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:18:59.543883    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:18:59.554938    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:18:59.554950    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:18:59.569721    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:18:59.569735    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:18:59.581142    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:18:59.581151    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:18:59.620482    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:18:59.620501    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:18:59.634622    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:18:59.634635    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:18:59.649263    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:18:59.649277    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:18:59.664723    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:18:59.664735    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:18:59.703142    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:18:59.703154    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:18:59.718000    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:18:59.718014    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:18:59.734329    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:18:59.734343    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:02.249037    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:07.251232    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:07.251383    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:07.265215    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:19:07.265296    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:07.276996    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:19:07.277068    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:07.287918    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:19:07.287989    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:07.298682    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:19:07.298750    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:07.309174    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:19:07.309253    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:07.319637    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:19:07.319720    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:07.330319    4028 logs.go:276] 0 containers: []
	W0729 04:19:07.330328    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:07.330382    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:07.341001    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:19:07.341018    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:07.341024    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:07.378342    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:19:07.378352    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:19:07.392525    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:19:07.392536    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:19:07.408018    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:19:07.408027    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:19:07.420241    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:19:07.420251    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:19:07.432962    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:07.432974    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:07.472256    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:07.472266    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:07.476417    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:19:07.476423    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:19:07.490681    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:19:07.490692    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:19:07.502719    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:19:07.502730    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:19:07.516881    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:19:07.516892    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:19:07.533947    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:19:07.533957    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:19:07.547722    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:19:07.547732    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:19:07.562180    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:07.562188    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:07.585952    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:19:07.585961    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:19:07.611483    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:19:07.611495    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:19:07.623701    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:19:07.623717    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:10.136900    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:15.137505    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:15.137615    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:15.150056    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:19:15.150120    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:15.160991    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:19:15.161054    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:15.172723    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:19:15.172792    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:15.183832    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:19:15.183898    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:15.194650    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:19:15.194722    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:15.209505    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:19:15.209576    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:15.220148    4028 logs.go:276] 0 containers: []
	W0729 04:19:15.220160    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:15.220223    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:15.231011    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:19:15.231028    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:19:15.231034    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:19:15.246652    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:19:15.246661    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:19:15.261995    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:19:15.262006    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:19:15.274522    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:19:15.274531    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:19:15.300117    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:15.300127    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:15.339196    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:15.339208    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:15.378306    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:19:15.378316    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:19:15.403811    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:19:15.403822    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:19:15.420430    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:19:15.420442    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:19:15.433293    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:19:15.433304    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:19:15.453319    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:19:15.453330    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:19:15.466431    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:19:15.466441    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:19:15.483491    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:15.483500    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:15.508365    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:19:15.508376    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:15.520484    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:15.520501    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:15.524838    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:19:15.524845    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:19:15.541357    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:19:15.541368    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:19:18.055184    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:23.057428    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:23.057559    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:23.069106    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:19:23.069175    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:23.080174    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:19:23.080249    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:23.090927    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:19:23.091000    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:23.101952    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:19:23.102022    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:23.118046    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:19:23.118111    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:23.133934    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:19:23.134009    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:23.145308    4028 logs.go:276] 0 containers: []
	W0729 04:19:23.145318    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:23.145393    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:23.156286    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:19:23.156304    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:19:23.156310    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:19:23.175570    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:19:23.175586    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:19:23.191654    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:19:23.191668    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:19:23.203393    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:19:23.203405    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:23.215770    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:23.215784    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:23.251389    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:19:23.251400    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:19:23.267714    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:19:23.267724    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:19:23.282779    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:19:23.282790    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:19:23.294728    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:23.294739    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:23.318430    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:23.318438    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:23.355948    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:19:23.355959    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:19:23.368333    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:19:23.368349    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:19:23.384549    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:19:23.384562    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:19:23.402202    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:19:23.402213    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:19:23.416482    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:19:23.416493    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:19:23.441467    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:19:23.441479    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:19:23.453830    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:23.453842    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:25.960028    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:30.962109    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:30.962259    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:30.984865    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:19:30.984937    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:31.004615    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:19:31.004695    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:31.019157    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:19:31.019227    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:31.030000    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:19:31.030071    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:31.040506    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:19:31.040571    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:31.056492    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:19:31.056559    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:31.066406    4028 logs.go:276] 0 containers: []
	W0729 04:19:31.066417    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:31.066475    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:31.076666    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:19:31.076684    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:19:31.076689    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:19:31.090708    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:19:31.090720    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:19:31.105210    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:19:31.105223    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:19:31.122993    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:31.123003    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:31.147896    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:31.147903    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:31.186502    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:19:31.186513    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:31.198431    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:19:31.198444    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:19:31.210211    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:19:31.210222    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:19:31.224695    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:19:31.224708    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:19:31.249056    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:19:31.249068    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:19:31.260946    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:31.260960    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:31.265157    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:19:31.265165    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:19:31.276458    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:19:31.276471    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:19:31.290904    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:19:31.290915    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:19:31.305393    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:19:31.305406    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:19:31.317336    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:19:31.317349    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:19:31.328178    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:31.328189    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:33.872754    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:38.875235    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:38.875408    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:38.894526    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:19:38.894626    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:38.908449    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:19:38.908522    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:38.920445    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:19:38.920514    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:38.930985    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:19:38.931052    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:38.946007    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:19:38.946081    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:38.957138    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:19:38.957207    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:38.970038    4028 logs.go:276] 0 containers: []
	W0729 04:19:38.970049    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:38.970109    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:38.980417    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:19:38.980437    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:19:38.980442    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:19:38.998226    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:19:38.998236    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:19:39.009633    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:19:39.009645    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:19:39.024005    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:39.024018    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:39.069484    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:19:39.069494    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:19:39.093726    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:19:39.093739    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:19:39.108185    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:19:39.108196    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:19:39.119580    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:19:39.119610    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:19:39.135146    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:19:39.135157    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:19:39.150398    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:19:39.150408    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:39.162237    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:39.162247    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:39.166878    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:19:39.166884    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:19:39.178731    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:19:39.178743    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:19:39.190162    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:39.190171    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:39.214921    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:19:39.214931    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:19:39.232827    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:19:39.232837    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:19:39.244091    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:39.244108    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:41.782887    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:46.785134    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:46.785304    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:46.799823    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:19:46.799900    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:46.813727    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:19:46.813794    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:46.824275    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:19:46.824333    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:46.834447    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:19:46.834517    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:46.844770    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:19:46.844839    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:46.855427    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:19:46.855489    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:46.872655    4028 logs.go:276] 0 containers: []
	W0729 04:19:46.872670    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:46.872730    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:46.884461    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:19:46.884479    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:46.884485    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:46.888805    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:19:46.888810    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:19:46.903067    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:19:46.903080    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:19:46.917937    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:46.917952    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:46.958539    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:19:46.958548    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:19:46.969959    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:19:46.969972    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:19:46.982910    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:19:46.982920    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:19:46.995847    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:46.995859    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:47.020556    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:19:47.020563    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:47.033171    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:19:47.033182    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:19:47.057091    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:19:47.057101    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:19:47.072512    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:19:47.072526    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:19:47.089805    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:19:47.089816    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:19:47.101715    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:19:47.101729    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:19:47.112457    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:19:47.112469    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:19:47.126267    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:19:47.126277    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:19:47.141160    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:47.141171    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:49.677456    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:19:54.679556    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:19:54.679680    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:19:54.694128    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:19:54.694204    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:19:54.704505    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:19:54.704573    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:19:54.715041    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:19:54.715103    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:19:54.725636    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:19:54.725700    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:19:54.736352    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:19:54.736422    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:19:54.751568    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:19:54.751639    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:19:54.761408    4028 logs.go:276] 0 containers: []
	W0729 04:19:54.761419    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:19:54.761478    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:19:54.771502    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:19:54.771521    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:19:54.771527    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:19:54.808160    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:19:54.808174    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:19:54.827452    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:19:54.827462    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:19:54.839291    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:19:54.839303    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:19:54.855278    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:19:54.855288    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:19:54.871969    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:19:54.871981    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:19:54.910256    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:19:54.910265    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:19:54.929456    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:19:54.929468    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:19:54.944911    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:19:54.944924    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:19:54.962257    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:19:54.962268    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:19:54.973701    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:19:54.973713    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:19:54.999205    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:19:54.999214    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:19:55.003269    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:19:55.003277    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:19:55.017602    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:19:55.017613    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:19:55.042928    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:19:55.042939    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:19:55.059262    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:19:55.059277    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:19:55.074720    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:19:55.074732    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:19:57.588300    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:02.590389    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:02.590584    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:02.613254    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:20:02.613361    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:02.629273    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:20:02.629359    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:02.644457    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:20:02.644532    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:02.656196    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:20:02.656272    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:02.666671    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:20:02.666737    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:02.677547    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:20:02.677619    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:02.687575    4028 logs.go:276] 0 containers: []
	W0729 04:20:02.687588    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:02.687654    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:02.701060    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:20:02.701079    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:02.701085    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:02.738668    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:20:02.738679    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:20:02.751435    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:20:02.751447    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:20:02.766172    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:20:02.766185    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:20:02.777712    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:20:02.777725    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:20:02.792837    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:20:02.792849    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:20:02.807574    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:20:02.807586    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:20:02.819297    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:02.819312    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:02.843666    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:02.843673    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:02.882886    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:02.882899    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:02.887417    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:20:02.887424    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:20:02.901097    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:20:02.901110    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:20:02.926657    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:20:02.926670    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:20:02.940980    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:20:02.940991    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:20:02.954813    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:20:02.954823    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:20:02.972164    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:20:02.972175    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:20:02.986602    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:20:02.986612    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:05.500131    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:10.502415    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:10.502596    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:10.520192    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:20:10.520286    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:10.533716    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:20:10.533804    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:10.545304    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:20:10.545380    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:10.556189    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:20:10.556278    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:10.566676    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:20:10.566748    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:10.577440    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:20:10.577547    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:10.588131    4028 logs.go:276] 0 containers: []
	W0729 04:20:10.588142    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:10.588206    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:10.598315    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:20:10.598335    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:20:10.598340    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:20:10.623823    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:20:10.623836    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:20:10.637448    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:20:10.637462    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:20:10.654712    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:10.654723    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:10.694331    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:20:10.694339    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:20:10.708829    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:20:10.708840    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:20:10.720554    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:20:10.720564    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:20:10.745446    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:20:10.745456    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:20:10.756849    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:20:10.756859    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:20:10.768112    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:10.768123    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:10.792687    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:20:10.792701    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:10.805039    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:10.805049    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:10.809469    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:10.809474    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:10.844816    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:20:10.844827    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:20:10.859085    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:20:10.859095    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:20:10.870697    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:20:10.870710    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:20:10.893823    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:20:10.893834    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:20:13.411965    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:18.414604    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:18.414852    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:18.446056    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:20:18.446162    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:18.463064    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:20:18.463147    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:18.476048    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:20:18.476123    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:18.487570    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:20:18.487641    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:18.497834    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:20:18.497897    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:18.508524    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:20:18.508595    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:18.518124    4028 logs.go:276] 0 containers: []
	W0729 04:20:18.518138    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:18.518191    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:18.528634    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:20:18.528652    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:20:18.528657    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:20:18.540383    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:20:18.540397    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:20:18.558323    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:18.558334    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:18.581991    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:18.582000    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:18.621365    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:18.621374    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:18.625629    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:20:18.625636    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:20:18.639878    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:20:18.639891    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:20:18.655323    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:20:18.655333    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:20:18.667799    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:20:18.667812    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:20:18.682127    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:20:18.682137    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:18.693996    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:20:18.694011    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:20:18.711868    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:18.711879    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:18.747759    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:20:18.747771    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:20:18.773296    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:20:18.773308    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:20:18.787084    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:20:18.787095    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:20:18.798874    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:20:18.798889    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:20:18.813225    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:20:18.813235    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:20:21.327371    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:26.329490    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:26.329621    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:26.356093    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:20:26.356173    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:26.369797    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:20:26.369868    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:26.380622    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:20:26.380692    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:26.391343    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:20:26.391422    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:26.402205    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:20:26.402273    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:26.414399    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:20:26.414474    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:26.424684    4028 logs.go:276] 0 containers: []
	W0729 04:20:26.424695    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:26.424756    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:26.435681    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:20:26.435698    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:26.435703    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:26.473156    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:26.473168    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:26.508021    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:20:26.508034    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:20:26.532882    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:20:26.532894    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:20:26.547183    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:20:26.547196    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:20:26.561960    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:20:26.561972    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:20:26.577148    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:20:26.577161    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:20:26.588673    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:20:26.588684    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:20:26.600197    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:20:26.600210    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:20:26.620162    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:20:26.620173    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:20:26.643467    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:20:26.643478    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:20:26.654874    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:26.654885    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:26.659177    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:20:26.659184    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:20:26.673127    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:20:26.673137    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:20:26.684493    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:20:26.684506    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:20:26.696006    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:26.696016    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:26.720775    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:20:26.720790    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:29.235248    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:34.237989    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:34.238471    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:34.280544    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:20:34.280678    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:34.305803    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:20:34.305887    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:34.319736    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:20:34.319811    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:34.331006    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:20:34.331075    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:34.341767    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:20:34.341838    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:34.352865    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:20:34.352936    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:34.363655    4028 logs.go:276] 0 containers: []
	W0729 04:20:34.363667    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:34.363732    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:34.377457    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:20:34.377475    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:20:34.377481    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:20:34.388933    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:34.388946    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:34.412147    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:20:34.412160    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:34.424838    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:34.424851    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:34.464305    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:20:34.464314    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:20:34.482712    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:20:34.482723    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:20:34.497125    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:20:34.497135    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:20:34.508941    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:20:34.508951    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:20:34.526610    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:20:34.526620    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:20:34.538629    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:34.538640    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:34.542773    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:20:34.542783    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:20:34.558200    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:20:34.558210    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:20:34.570422    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:20:34.570434    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:20:34.585800    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:20:34.585810    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:20:34.599308    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:34.599324    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:34.638072    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:20:34.638088    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:20:34.663975    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:20:34.663986    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:20:37.181305    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:42.183645    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:42.183973    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:42.217900    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:20:42.218033    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:42.238073    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:20:42.238166    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:42.252074    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:20:42.252151    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:42.268097    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:20:42.268170    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:42.279132    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:20:42.279201    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:42.290243    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:20:42.290313    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:42.301113    4028 logs.go:276] 0 containers: []
	W0729 04:20:42.301124    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:42.301184    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:42.312265    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:20:42.312284    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:42.312290    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:42.352607    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:42.352619    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:42.357158    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:42.357165    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:42.392031    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:20:42.392043    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:20:42.403703    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:20:42.403716    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:20:42.419793    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:20:42.419805    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:20:42.445366    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:20:42.445381    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:20:42.464777    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:20:42.464787    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:20:42.476509    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:20:42.476520    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:20:42.487754    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:20:42.487768    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:42.499715    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:20:42.499731    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:20:42.518891    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:20:42.518902    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:20:42.534598    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:20:42.534613    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:20:42.546492    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:20:42.546502    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:20:42.564929    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:20:42.564942    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:20:42.578698    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:20:42.578710    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:20:42.590304    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:42.590319    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:45.115113    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:50.117391    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:50.117668    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:50.147381    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:20:50.147516    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:50.166013    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:20:50.166112    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:50.180591    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:20:50.180666    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:50.194967    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:20:50.195047    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:50.205530    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:20:50.205601    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:50.216334    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:20:50.216402    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:50.231073    4028 logs.go:276] 0 containers: []
	W0729 04:20:50.231084    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:50.231143    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:50.241623    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:20:50.241642    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:20:50.241648    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:20:50.256350    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:20:50.256360    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:20:50.277294    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:20:50.277305    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:20:50.300348    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:50.300361    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:50.324141    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:20:50.324152    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:20:50.336221    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:50.336232    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:50.377496    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:50.377508    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:50.381943    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:50.381954    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:50.416818    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:20:50.416832    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:20:50.431974    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:20:50.431987    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:20:50.443910    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:20:50.443920    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:20:50.458736    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:20:50.458747    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:20:50.483931    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:20:50.483942    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:20:50.502364    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:20:50.502374    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:20:50.530124    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:20:50.530141    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:20:50.559537    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:20:50.559550    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:20:50.571528    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:20:50.571543    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:20:53.085054    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:20:58.087189    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:20:58.087322    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:20:58.104564    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:20:58.104651    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:20:58.117504    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:20:58.117581    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:20:58.128767    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:20:58.128838    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:20:58.139677    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:20:58.139748    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:20:58.149933    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:20:58.150004    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:20:58.160883    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:20:58.160956    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:20:58.171063    4028 logs.go:276] 0 containers: []
	W0729 04:20:58.171073    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:20:58.171130    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:20:58.181346    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:20:58.181363    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:20:58.181368    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:20:58.192574    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:20:58.192586    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:20:58.230402    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:20:58.230410    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:20:58.245020    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:20:58.245034    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:20:58.260261    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:20:58.260274    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:20:58.277662    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:20:58.277675    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:20:58.292236    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:20:58.292248    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:20:58.305734    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:20:58.305747    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:20:58.317530    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:20:58.317540    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:20:58.329135    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:20:58.329146    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:20:58.340600    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:20:58.340610    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:20:58.362960    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:20:58.362971    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:20:58.367165    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:20:58.367171    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:20:58.403011    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:20:58.403021    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:20:58.417880    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:20:58.417891    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:20:58.442561    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:20:58.442571    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:20:58.454173    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:20:58.454184    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:21:00.968301    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:05.970560    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:05.970720    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:21:05.984710    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:21:05.984783    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:21:05.998479    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:21:05.998547    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:21:06.008837    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:21:06.008904    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:21:06.019066    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:21:06.019139    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:21:06.029823    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:21:06.029889    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:21:06.040516    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:21:06.040589    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:21:06.057010    4028 logs.go:276] 0 containers: []
	W0729 04:21:06.057028    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:21:06.057090    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:21:06.067998    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:21:06.068020    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:21:06.068025    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:21:06.072706    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:21:06.072712    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:21:06.085868    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:21:06.085881    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:21:06.099252    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:21:06.099263    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:21:06.113526    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:21:06.113541    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:21:06.127974    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:21:06.127987    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:21:06.139833    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:21:06.139845    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:21:06.175023    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:21:06.175035    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:21:06.186522    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:21:06.186531    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:21:06.206879    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:21:06.206889    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:21:06.232687    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:21:06.232699    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:21:06.244577    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:21:06.244588    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:21:06.256323    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:21:06.256336    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:21:06.293517    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:21:06.293526    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:21:06.304925    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:21:06.304937    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:21:06.323967    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:21:06.323977    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:21:06.335684    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:21:06.335695    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:21:08.860491    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:13.862700    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:13.863023    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:21:13.892677    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:21:13.892813    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:21:13.911781    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:21:13.911883    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:21:13.925943    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:21:13.926023    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:21:13.942639    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:21:13.942719    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:21:13.953911    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:21:13.953984    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:21:13.966126    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:21:13.966194    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:21:13.976140    4028 logs.go:276] 0 containers: []
	W0729 04:21:13.976150    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:21:13.976204    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:21:13.987802    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:21:13.987820    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:21:13.987825    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:21:14.022734    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:21:14.022748    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:21:14.036912    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:21:14.036922    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:21:14.048712    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:21:14.048724    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:21:14.073562    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:21:14.073574    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:21:14.092394    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:21:14.092408    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:21:14.103602    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:21:14.103613    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:21:14.115606    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:21:14.115617    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:21:14.153000    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:21:14.153008    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:21:14.156923    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:21:14.156931    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:21:14.170367    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:21:14.170377    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:21:14.185458    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:21:14.185493    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:21:14.224305    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:21:14.224318    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:21:14.236230    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:21:14.236242    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:21:14.250417    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:21:14.250429    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:21:14.261615    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:21:14.261626    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:21:14.285230    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:21:14.285240    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:21:16.810397    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:21.812510    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:21.812604    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:21:21.827738    4028 logs.go:276] 2 containers: [811ff0c15959 8f2228fa6055]
	I0729 04:21:21.827811    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:21:21.838305    4028 logs.go:276] 2 containers: [5948fdc5b4b3 cae11772d89d]
	I0729 04:21:21.838374    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:21:21.848684    4028 logs.go:276] 1 containers: [690d65bcaa18]
	I0729 04:21:21.848752    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:21:21.859193    4028 logs.go:276] 2 containers: [97efbab3802b 486a2b7332b3]
	I0729 04:21:21.859270    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:21:21.869791    4028 logs.go:276] 1 containers: [b9f1291264bc]
	I0729 04:21:21.869863    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:21:21.880036    4028 logs.go:276] 2 containers: [fd56b1c88793 68f8e4539bd1]
	I0729 04:21:21.880101    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:21:21.889731    4028 logs.go:276] 0 containers: []
	W0729 04:21:21.889742    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:21:21.889803    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:21:21.900535    4028 logs.go:276] 2 containers: [b5c5bd65ef7c 849f5a969b5a]
	I0729 04:21:21.900554    4028 logs.go:123] Gathering logs for etcd [cae11772d89d] ...
	I0729 04:21:21.900560    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cae11772d89d"
	I0729 04:21:21.914745    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:21:21.914757    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:21:21.953021    4028 logs.go:123] Gathering logs for kube-apiserver [811ff0c15959] ...
	I0729 04:21:21.953034    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 811ff0c15959"
	I0729 04:21:21.967695    4028 logs.go:123] Gathering logs for coredns [690d65bcaa18] ...
	I0729 04:21:21.967706    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 690d65bcaa18"
	I0729 04:21:21.979570    4028 logs.go:123] Gathering logs for storage-provisioner [b5c5bd65ef7c] ...
	I0729 04:21:21.979582    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5c5bd65ef7c"
	I0729 04:21:21.990676    4028 logs.go:123] Gathering logs for storage-provisioner [849f5a969b5a] ...
	I0729 04:21:21.990687    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849f5a969b5a"
	I0729 04:21:22.002269    4028 logs.go:123] Gathering logs for kube-controller-manager [fd56b1c88793] ...
	I0729 04:21:22.002283    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd56b1c88793"
	I0729 04:21:22.024998    4028 logs.go:123] Gathering logs for kube-controller-manager [68f8e4539bd1] ...
	I0729 04:21:22.025008    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68f8e4539bd1"
	I0729 04:21:22.043368    4028 logs.go:123] Gathering logs for kube-apiserver [8f2228fa6055] ...
	I0729 04:21:22.043381    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2228fa6055"
	I0729 04:21:22.067782    4028 logs.go:123] Gathering logs for etcd [5948fdc5b4b3] ...
	I0729 04:21:22.067796    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5948fdc5b4b3"
	I0729 04:21:22.081986    4028 logs.go:123] Gathering logs for kube-scheduler [97efbab3802b] ...
	I0729 04:21:22.081999    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97efbab3802b"
	I0729 04:21:22.093916    4028 logs.go:123] Gathering logs for kube-scheduler [486a2b7332b3] ...
	I0729 04:21:22.093926    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 486a2b7332b3"
	I0729 04:21:22.118853    4028 logs.go:123] Gathering logs for kube-proxy [b9f1291264bc] ...
	I0729 04:21:22.118863    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f1291264bc"
	I0729 04:21:22.130021    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:21:22.130034    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:21:22.151274    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:21:22.151282    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:21:22.188141    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:21:22.188149    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:21:22.192366    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:21:22.192376    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:21:24.706003    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:29.708269    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:29.708356    4028 kubeadm.go:597] duration metric: took 4m3.869230542s to restartPrimaryControlPlane
	W0729 04:21:29.708456    4028 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 04:21:29.708493    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 04:21:30.725253    4028 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.016776375s)
	I0729 04:21:30.725317    4028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 04:21:30.730555    4028 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 04:21:30.733574    4028 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 04:21:30.736220    4028 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 04:21:30.736225    4028 kubeadm.go:157] found existing configuration files:
	
	I0729 04:21:30.736245    4028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/admin.conf
	I0729 04:21:30.739036    4028 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 04:21:30.739069    4028 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 04:21:30.742740    4028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/kubelet.conf
	I0729 04:21:30.745895    4028 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 04:21:30.745941    4028 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 04:21:30.748919    4028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/controller-manager.conf
	I0729 04:21:30.751643    4028 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 04:21:30.751669    4028 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 04:21:30.755061    4028 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/scheduler.conf
	I0729 04:21:30.757671    4028 kubeadm.go:163] "https://control-plane.minikube.internal:50517" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50517 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 04:21:30.757696    4028 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
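
The four grep/rm pairs above are minikube's stale-kubeconfig check: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise deleted so that kubeadm regenerates it. Condensed into a single loop (same commands, port 50517 taken from this run):

    endpoint="https://control-plane.minikube.internal:50517"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Keep the file only if it points at the expected endpoint;
      # otherwise remove it so 'kubeadm init' writes a fresh one.
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done

Here every grep exits with status 2 because 'kubeadm reset' had already removed the files, so all four rm calls are no-ops.
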
	I0729 04:21:30.760529    4028 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 04:21:30.778738    4028 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 04:21:30.778890    4028 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 04:21:30.825849    4028 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 04:21:30.825909    4028 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 04:21:30.825970    4028 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 04:21:30.877160    4028 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 04:21:30.881140    4028 out.go:204]   - Generating certificates and keys ...
	I0729 04:21:30.881173    4028 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 04:21:30.881205    4028 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 04:21:30.881241    4028 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 04:21:30.881278    4028 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 04:21:30.881328    4028 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 04:21:30.881402    4028 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 04:21:30.881431    4028 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 04:21:30.881466    4028 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 04:21:30.881502    4028 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 04:21:30.881540    4028 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 04:21:30.881562    4028 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 04:21:30.881590    4028 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 04:21:30.955454    4028 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 04:21:31.011445    4028 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 04:21:31.154520    4028 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 04:21:31.240640    4028 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 04:21:31.269941    4028 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 04:21:31.270330    4028 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 04:21:31.270353    4028 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 04:21:31.352673    4028 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 04:21:31.355922    4028 out.go:204]   - Booting up control plane ...
	I0729 04:21:31.355967    4028 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 04:21:31.356005    4028 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 04:21:31.356045    4028 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 04:21:31.356095    4028 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 04:21:31.356359    4028 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 04:21:36.358787    4028 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.002114 seconds
	I0729 04:21:36.358935    4028 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 04:21:36.363546    4028 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 04:21:36.871144    4028 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 04:21:36.871253    4028 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-338000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 04:21:37.376263    4028 kubeadm.go:310] [bootstrap-token] Using token: zaydr7.hxiuzrvd5ftnnr8w
	I0729 04:21:37.382041    4028 out.go:204]   - Configuring RBAC rules ...
	I0729 04:21:37.382107    4028 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 04:21:37.382166    4028 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 04:21:37.390208    4028 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 04:21:37.391338    4028 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 04:21:37.392277    4028 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 04:21:37.394053    4028 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 04:21:37.397836    4028 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 04:21:37.543039    4028 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 04:21:37.780197    4028 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 04:21:37.780790    4028 kubeadm.go:310] 
	I0729 04:21:37.780825    4028 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 04:21:37.780853    4028 kubeadm.go:310] 
	I0729 04:21:37.780959    4028 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 04:21:37.780968    4028 kubeadm.go:310] 
	I0729 04:21:37.780980    4028 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 04:21:37.781023    4028 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 04:21:37.781100    4028 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 04:21:37.781104    4028 kubeadm.go:310] 
	I0729 04:21:37.781163    4028 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 04:21:37.781169    4028 kubeadm.go:310] 
	I0729 04:21:37.781210    4028 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 04:21:37.781217    4028 kubeadm.go:310] 
	I0729 04:21:37.781288    4028 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 04:21:37.781320    4028 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 04:21:37.781404    4028 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 04:21:37.781407    4028 kubeadm.go:310] 
	I0729 04:21:37.781443    4028 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 04:21:37.781494    4028 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 04:21:37.781501    4028 kubeadm.go:310] 
	I0729 04:21:37.781565    4028 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zaydr7.hxiuzrvd5ftnnr8w \
	I0729 04:21:37.781620    4028 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e5aa2d5aa27d88407c50ef5c55a2dae7e3993515072a6e61b6476ae55fad38d6 \
	I0729 04:21:37.781634    4028 kubeadm.go:310] 	--control-plane 
	I0729 04:21:37.781641    4028 kubeadm.go:310] 
	I0729 04:21:37.781680    4028 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 04:21:37.781683    4028 kubeadm.go:310] 
	I0729 04:21:37.781737    4028 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zaydr7.hxiuzrvd5ftnnr8w \
	I0729 04:21:37.781817    4028 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e5aa2d5aa27d88407c50ef5c55a2dae7e3993515072a6e61b6476ae55fad38d6 
	I0729 04:21:37.781884    4028 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 04:21:37.781925    4028 cni.go:84] Creating CNI manager for ""
	I0729 04:21:37.781934    4028 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:21:37.785309    4028 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 04:21:37.792311    4028 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 04:21:37.795349    4028 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
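
The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. For orientation, a bridge conflist of the kind minikube generates looks roughly like the following; the exact subnet and plugin options here are assumptions, not the file's actual contents:

    # Illustrative only -- written by hand the way minikube writes it over SSH.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
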
	I0729 04:21:37.800345    4028 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 04:21:37.800402    4028 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 04:21:37.800414    4028 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-338000 minikube.k8s.io/updated_at=2024_07_29T04_21_37_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53 minikube.k8s.io/name=stopped-upgrade-338000 minikube.k8s.io/primary=true
	I0729 04:21:37.841935    4028 ops.go:34] apiserver oom_adj: -16
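
The oom_adj line above confirms the kubelet started the apiserver with a strongly negative OOM adjustment (-16 on the legacy -17..15 scale), telling the kernel's OOM killer to spare the process under memory pressure. The value can be read back at any time:

    # Legacy scale (-17..15), the field minikube checks:
    cat /proc/$(pgrep kube-apiserver)/oom_adj
    # Modern scale (-1000..1000) maintained by the kernel alongside it:
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj
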
	I0729 04:21:37.841953    4028 kubeadm.go:1113] duration metric: took 41.592917ms to wait for elevateKubeSystemPrivileges
	I0729 04:21:37.841962    4028 kubeadm.go:394] duration metric: took 4m12.016227s to StartCluster
	I0729 04:21:37.841974    4028 settings.go:142] acquiring lock: {Name:mkb57b03ccb64deee52152ed8ac01af4d9e1ee07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:21:37.842057    4028 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:21:37.843148    4028 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/kubeconfig: {Name:mkc1463454d977493e341af62af023d087f8e1b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:21:37.843465    4028 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:21:37.843531    4028 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 04:21:37.843586    4028 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-338000"
	I0729 04:21:37.843598    4028 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-338000"
	W0729 04:21:37.843601    4028 addons.go:243] addon storage-provisioner should already be in state true
	I0729 04:21:37.843611    4028 host.go:66] Checking if "stopped-upgrade-338000" exists ...
	I0729 04:21:37.843609    4028 config.go:182] Loaded profile config "stopped-upgrade-338000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:21:37.843632    4028 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-338000"
	I0729 04:21:37.843674    4028 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-338000"
	I0729 04:21:37.844538    4028 kapi.go:59] client config for stopped-upgrade-338000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19336-945/.minikube/profiles/stopped-upgrade-338000/client.key", CAFile:"/Users/jenkins/minikube-integration/19336-945/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1043bc080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 04:21:37.844652    4028 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-338000"
	W0729 04:21:37.844665    4028 addons.go:243] addon default-storageclass should already be in state true
	I0729 04:21:37.844673    4028 host.go:66] Checking if "stopped-upgrade-338000" exists ...
	I0729 04:21:37.847309    4028 out.go:177] * Verifying Kubernetes components...
	I0729 04:21:37.847687    4028 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 04:21:37.851311    4028 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 04:21:37.851318    4028 sshutil.go:53] new ssh client: &{IP:localhost Port:50482 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/stopped-upgrade-338000/id_rsa Username:docker}
	I0729 04:21:37.855230    4028 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:21:37.859358    4028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:21:37.863313    4028 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 04:21:37.863321    4028 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 04:21:37.863328    4028 sshutil.go:53] new ssh client: &{IP:localhost Port:50482 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/stopped-upgrade-338000/id_rsa Username:docker}
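The two scp lines above stream the addon manifests into the guest over the forwarded SSH port (storageclass.yaml from disk, storage-provisioner.yaml rendered in memory). A minimal sketch of that copy step, using the scp binary rather than minikube's Go SSH client; the key path is a placeholder for the profile's id_rsa shown in the log, and the sudo step needed to land files under /etc/kubernetes/addons is omitted:

package main

import (
	"fmt"
	"os/exec"
)

// copyAddon pushes one manifest into the guest, analogous to the
// "scp ... --> /etc/kubernetes/addons/..." lines above. Sketch only:
// minikube uses an in-process SSH client, not the scp binary.
func copyAddon(local, remote string) error {
	cmd := exec.Command("scp",
		"-i", "/path/to/.minikube/machines/<profile>/id_rsa", // placeholder key path
		"-P", "50482", // forwarded SSH port from the log
		local, "docker@localhost:"+remote)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("scp %s: %v\n%s", local, err, out)
	}
	return nil
}

func main() {
	if err := copyAddon("storageclass.yaml", "/etc/kubernetes/addons/storageclass.yaml"); err != nil {
		fmt.Println(err)
	}
}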
	I0729 04:21:37.946356    4028 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 04:21:37.951507    4028 api_server.go:52] waiting for apiserver process to appear ...
	I0729 04:21:37.951546    4028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:21:37.955746    4028 api_server.go:72] duration metric: took 112.273791ms to wait for apiserver process to appear ...
	I0729 04:21:37.955754    4028 api_server.go:88] waiting for apiserver healthz status ...
	I0729 04:21:37.955760    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
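From here the log is dominated by this one check failing: a GET against https://10.0.2.15:8443/healthz every few seconds, each attempt dying with "Client.Timeout exceeded" because nothing answers on the guest address. A minimal sketch of such a poll loop, with illustrative names rather than minikube's api_server.go internals:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz mirrors the check/retry cadence above: one GET to
// /healthz with a short per-request timeout, repeated until the server
// answers 200 or an overall deadline expires.
func waitForHealthz(url string, overall time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // source of the "Client.Timeout exceeded" errors above
		Transport: &http.Transport{
			// Stand-in for the profile's CA handling in the real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // avoid hot-looping on fast failures
	}
	return fmt.Errorf("apiserver at %s never reported healthy", url)
}

func main() {
	fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 2*time.Minute))
}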
	I0729 04:21:37.968973    4028 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 04:21:38.002077    4028 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
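Each addon manifest is then applied inside the guest with the version-pinned kubectl binary under the cluster kubeconfig, exactly the two Run lines above. A sketch of that step wrapped in a plain ssh invocation; host, port, and key path are stand-ins for the profile's values, and minikube's real ssh_runner is a Go SSH client:

package main

import (
	"fmt"
	"os/exec"
)

// applyAddon runs the same command the log shows ssh_runner executing
// inside the guest.
func applyAddon(manifest string) error {
	cmd := exec.Command("ssh",
		"-i", "/path/to/machines/<profile>/id_rsa", // placeholder key path
		"-p", "50482", "docker@localhost",
		"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	} {
		if err := applyAddon(m); err != nil {
			fmt.Println(err)
		}
	}
}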
	I0729 04:21:42.957743    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:42.957780    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:47.957969    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:47.958019    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:52.958720    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:52.958764    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:21:57.959211    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:21:57.959252    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:22:02.960374    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:22:02.960427    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:22:07.961355    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:22:07.961378    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 04:22:08.320609    4028 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 04:22:08.325914    4028 out.go:177] * Enabled addons: storage-provisioner
	I0729 04:22:08.332757    4028 addons.go:510] duration metric: took 30.490250084s for enable addons: enabled=[storage-provisioner]
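The default-storageclass failure above is a plain List call against /apis/storage.k8s.io/v1/storageclasses hitting the same dead endpoint that the healthz poll does. The equivalent call with client-go, assuming a kubeconfig path (placeholder) that points at this cluster:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config like the kapi.go dump above, here from a
	// kubeconfig rather than explicit cert/key/CA paths.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The callback that fails in this run does the equivalent of this
	// list; with the apiserver unreachable it returns an i/o timeout.
	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Println("listing StorageClasses:", err)
		return
	}
	fmt.Println(len(scs.Items), "storage classes")
}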
	I0729 04:22:12.962533    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:22:12.962555    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:22:17.964407    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:22:17.964435    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:22:22.966554    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:22:22.966593    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:22:27.968737    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:22:27.968767    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:22:32.969825    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:22:32.969865    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:22:37.971844    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:22:37.971933    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:22:37.982866    4028 logs.go:276] 1 containers: [d647a062be10]
	I0729 04:22:37.982936    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:22:37.993796    4028 logs.go:276] 1 containers: [e2e048041390]
	I0729 04:22:37.993871    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:22:38.004149    4028 logs.go:276] 2 containers: [1503ff4eb402 d09946f99776]
	I0729 04:22:38.004220    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:22:38.014402    4028 logs.go:276] 1 containers: [fc169c0c2174]
	I0729 04:22:38.014469    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:22:38.025044    4028 logs.go:276] 1 containers: [3f312c40ad82]
	I0729 04:22:38.025112    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:22:38.040877    4028 logs.go:276] 1 containers: [d41466ebf5b2]
	I0729 04:22:38.040945    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:22:38.055211    4028 logs.go:276] 0 containers: []
	W0729 04:22:38.055224    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:22:38.055284    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:22:38.065493    4028 logs.go:276] 1 containers: [cc0a68aa7fcb]
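Before every gathering pass, the tool enumerates the control-plane containers with one filtered docker ps query per component, as in the eight Run lines above. A local sketch of that discovery step (minikube executes the same commands inside the guest over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists container IDs whose names match one component,
// matching the "docker ps -a --filter=name=k8s_<component>" queries above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	}
}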
	I0729 04:22:38.065508    4028 logs.go:123] Gathering logs for etcd [e2e048041390] ...
	I0729 04:22:38.065513    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e048041390"
	I0729 04:22:38.080863    4028 logs.go:123] Gathering logs for coredns [d09946f99776] ...
	I0729 04:22:38.080874    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09946f99776"
	I0729 04:22:38.092641    4028 logs.go:123] Gathering logs for kube-proxy [3f312c40ad82] ...
	I0729 04:22:38.092653    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f312c40ad82"
	I0729 04:22:38.104283    4028 logs.go:123] Gathering logs for kube-controller-manager [d41466ebf5b2] ...
	I0729 04:22:38.104294    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d41466ebf5b2"
	I0729 04:22:38.121858    4028 logs.go:123] Gathering logs for storage-provisioner [cc0a68aa7fcb] ...
	I0729 04:22:38.121869    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0a68aa7fcb"
	I0729 04:22:38.133268    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:22:38.133279    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:22:38.156788    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:22:38.156796    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:22:38.192190    4028 logs.go:123] Gathering logs for kube-apiserver [d647a062be10] ...
	I0729 04:22:38.192204    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d647a062be10"
	I0729 04:22:38.207321    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:22:38.207331    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:22:38.219051    4028 logs.go:123] Gathering logs for coredns [1503ff4eb402] ...
	I0729 04:22:38.219066    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1503ff4eb402"
	I0729 04:22:38.230817    4028 logs.go:123] Gathering logs for kube-scheduler [fc169c0c2174] ...
	I0729 04:22:38.230827    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc169c0c2174"
	I0729 04:22:38.246498    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:22:38.246510    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:22:38.282325    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:22:38.282335    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
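That completes one full diagnostics pass, which then repeats after every failed healthz round: tail each discovered container, then pull kubelet and Docker unit logs, kernel warnings, and a describe nodes. A condensed sketch of the loop; the container IDs are the ones from this run and purely illustrative, and errors are ignored as befits best-effort log collection:

package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs mirrors one logs.go pass: docker logs --tail 400 per
// container, then the bash -c commands the log shows verbatim.
func gatherLogs(containers map[string]string) {
	for name, id := range containers {
		fmt.Printf("==> %s [%s]\n", name, id)
		out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Print(string(out))
	}
	for _, c := range []string{
		`sudo journalctl -u kubelet -n 400`,
		`sudo journalctl -u docker -u cri-docker -n 400`,
		`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		`sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
	} {
		out, _ := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		fmt.Print(string(out))
	}
}

func main() {
	gatherLogs(map[string]string{
		"kube-apiserver": "d647a062be10", // IDs taken from the log above
		"etcd":           "e2e048041390",
	})
}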
	I0729 04:22:40.786563    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:22:45.788061    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:22:45.788479    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:22:45.821969    4028 logs.go:276] 1 containers: [d647a062be10]
	I0729 04:22:45.822100    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:22:45.841733    4028 logs.go:276] 1 containers: [e2e048041390]
	I0729 04:22:45.841825    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:22:45.856566    4028 logs.go:276] 2 containers: [1503ff4eb402 d09946f99776]
	I0729 04:22:45.856633    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:22:45.868851    4028 logs.go:276] 1 containers: [fc169c0c2174]
	I0729 04:22:45.868912    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:22:45.879735    4028 logs.go:276] 1 containers: [3f312c40ad82]
	I0729 04:22:45.879824    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:22:45.890293    4028 logs.go:276] 1 containers: [d41466ebf5b2]
	I0729 04:22:45.890352    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:22:45.900677    4028 logs.go:276] 0 containers: []
	W0729 04:22:45.900689    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:22:45.900743    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:22:45.911174    4028 logs.go:276] 1 containers: [cc0a68aa7fcb]
	I0729 04:22:45.911187    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:22:45.911193    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:22:45.944913    4028 logs.go:123] Gathering logs for etcd [e2e048041390] ...
	I0729 04:22:45.944925    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e048041390"
	I0729 04:22:45.961162    4028 logs.go:123] Gathering logs for coredns [1503ff4eb402] ...
	I0729 04:22:45.961173    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1503ff4eb402"
	I0729 04:22:45.973037    4028 logs.go:123] Gathering logs for kube-scheduler [fc169c0c2174] ...
	I0729 04:22:45.973052    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc169c0c2174"
	I0729 04:22:45.993953    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:22:45.993965    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:22:46.027623    4028 logs.go:123] Gathering logs for kube-apiserver [d647a062be10] ...
	I0729 04:22:46.027631    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d647a062be10"
	I0729 04:22:46.041976    4028 logs.go:123] Gathering logs for coredns [d09946f99776] ...
	I0729 04:22:46.041988    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09946f99776"
	I0729 04:22:46.058392    4028 logs.go:123] Gathering logs for kube-proxy [3f312c40ad82] ...
	I0729 04:22:46.058406    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f312c40ad82"
	I0729 04:22:46.070016    4028 logs.go:123] Gathering logs for kube-controller-manager [d41466ebf5b2] ...
	I0729 04:22:46.070030    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d41466ebf5b2"
	I0729 04:22:46.090307    4028 logs.go:123] Gathering logs for storage-provisioner [cc0a68aa7fcb] ...
	I0729 04:22:46.090319    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0a68aa7fcb"
	I0729 04:22:46.102668    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:22:46.102684    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:22:46.126894    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:22:46.126904    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:22:46.139308    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:22:46.139322    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:22:48.643562    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:22:53.645657    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:22:53.645871    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:22:53.675575    4028 logs.go:276] 1 containers: [d647a062be10]
	I0729 04:22:53.675692    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:22:53.696577    4028 logs.go:276] 1 containers: [e2e048041390]
	I0729 04:22:53.696657    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:22:53.709675    4028 logs.go:276] 2 containers: [1503ff4eb402 d09946f99776]
	I0729 04:22:53.709745    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:22:53.721119    4028 logs.go:276] 1 containers: [fc169c0c2174]
	I0729 04:22:53.721187    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:22:53.731641    4028 logs.go:276] 1 containers: [3f312c40ad82]
	I0729 04:22:53.731711    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:22:53.741368    4028 logs.go:276] 1 containers: [d41466ebf5b2]
	I0729 04:22:53.741428    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:22:53.752408    4028 logs.go:276] 0 containers: []
	W0729 04:22:53.752421    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:22:53.752474    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:22:53.763142    4028 logs.go:276] 1 containers: [cc0a68aa7fcb]
	I0729 04:22:53.763158    4028 logs.go:123] Gathering logs for kube-controller-manager [d41466ebf5b2] ...
	I0729 04:22:53.763163    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d41466ebf5b2"
	I0729 04:22:53.780290    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:22:53.780302    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:22:53.814479    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:22:53.814488    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:22:53.818461    4028 logs.go:123] Gathering logs for kube-apiserver [d647a062be10] ...
	I0729 04:22:53.818469    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d647a062be10"
	I0729 04:22:53.832404    4028 logs.go:123] Gathering logs for etcd [e2e048041390] ...
	I0729 04:22:53.832416    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e048041390"
	I0729 04:22:53.846595    4028 logs.go:123] Gathering logs for coredns [1503ff4eb402] ...
	I0729 04:22:53.846607    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1503ff4eb402"
	I0729 04:22:53.857640    4028 logs.go:123] Gathering logs for kube-scheduler [fc169c0c2174] ...
	I0729 04:22:53.857652    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc169c0c2174"
	I0729 04:22:53.872447    4028 logs.go:123] Gathering logs for kube-proxy [3f312c40ad82] ...
	I0729 04:22:53.872458    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f312c40ad82"
	I0729 04:22:53.883785    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:22:53.883794    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:22:53.906789    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:22:53.906798    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:22:53.943684    4028 logs.go:123] Gathering logs for coredns [d09946f99776] ...
	I0729 04:22:53.943698    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09946f99776"
	I0729 04:22:53.955532    4028 logs.go:123] Gathering logs for storage-provisioner [cc0a68aa7fcb] ...
	I0729 04:22:53.955545    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0a68aa7fcb"
	I0729 04:22:53.966970    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:22:53.966982    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:22:56.479200    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:23:01.481908    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:23:01.482344    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:23:01.520759    4028 logs.go:276] 1 containers: [d647a062be10]
	I0729 04:23:01.520893    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:23:01.541461    4028 logs.go:276] 1 containers: [e2e048041390]
	I0729 04:23:01.541549    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:23:01.556745    4028 logs.go:276] 2 containers: [1503ff4eb402 d09946f99776]
	I0729 04:23:01.556821    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:23:01.569187    4028 logs.go:276] 1 containers: [fc169c0c2174]
	I0729 04:23:01.569255    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:23:01.579789    4028 logs.go:276] 1 containers: [3f312c40ad82]
	I0729 04:23:01.579861    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:23:01.590386    4028 logs.go:276] 1 containers: [d41466ebf5b2]
	I0729 04:23:01.590451    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:23:01.600650    4028 logs.go:276] 0 containers: []
	W0729 04:23:01.600662    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:23:01.600714    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:23:01.630699    4028 logs.go:276] 1 containers: [cc0a68aa7fcb]
	I0729 04:23:01.630714    4028 logs.go:123] Gathering logs for kube-apiserver [d647a062be10] ...
	I0729 04:23:01.630719    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d647a062be10"
	I0729 04:23:01.646133    4028 logs.go:123] Gathering logs for coredns [1503ff4eb402] ...
	I0729 04:23:01.646144    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1503ff4eb402"
	I0729 04:23:01.663070    4028 logs.go:123] Gathering logs for coredns [d09946f99776] ...
	I0729 04:23:01.663083    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09946f99776"
	I0729 04:23:01.676412    4028 logs.go:123] Gathering logs for kube-scheduler [fc169c0c2174] ...
	I0729 04:23:01.676424    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc169c0c2174"
	I0729 04:23:01.691361    4028 logs.go:123] Gathering logs for kube-proxy [3f312c40ad82] ...
	I0729 04:23:01.691375    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f312c40ad82"
	I0729 04:23:01.703470    4028 logs.go:123] Gathering logs for kube-controller-manager [d41466ebf5b2] ...
	I0729 04:23:01.703484    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d41466ebf5b2"
	I0729 04:23:01.720753    4028 logs.go:123] Gathering logs for storage-provisioner [cc0a68aa7fcb] ...
	I0729 04:23:01.720763    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0a68aa7fcb"
	I0729 04:23:01.736534    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:23:01.736544    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:23:01.748458    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:23:01.748473    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:23:01.785532    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:23:01.785540    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:23:01.789773    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:23:01.789781    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:23:01.827563    4028 logs.go:123] Gathering logs for etcd [e2e048041390] ...
	I0729 04:23:01.827573    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e048041390"
	I0729 04:23:01.841866    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:23:01.841879    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:23:04.369117    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:23:09.371009    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:23:09.371420    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:23:09.409947    4028 logs.go:276] 1 containers: [d647a062be10]
	I0729 04:23:09.410080    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:23:09.432747    4028 logs.go:276] 1 containers: [e2e048041390]
	I0729 04:23:09.432847    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:23:09.447522    4028 logs.go:276] 2 containers: [1503ff4eb402 d09946f99776]
	I0729 04:23:09.447600    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:23:09.459896    4028 logs.go:276] 1 containers: [fc169c0c2174]
	I0729 04:23:09.459968    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:23:09.471049    4028 logs.go:276] 1 containers: [3f312c40ad82]
	I0729 04:23:09.471115    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:23:09.481339    4028 logs.go:276] 1 containers: [d41466ebf5b2]
	I0729 04:23:09.481416    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:23:09.491128    4028 logs.go:276] 0 containers: []
	W0729 04:23:09.491139    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:23:09.491191    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:23:09.501511    4028 logs.go:276] 1 containers: [cc0a68aa7fcb]
	I0729 04:23:09.501527    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:23:09.501533    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:23:09.524446    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:23:09.524456    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:23:09.528643    4028 logs.go:123] Gathering logs for etcd [e2e048041390] ...
	I0729 04:23:09.528652    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e048041390"
	I0729 04:23:09.542633    4028 logs.go:123] Gathering logs for coredns [1503ff4eb402] ...
	I0729 04:23:09.542643    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1503ff4eb402"
	I0729 04:23:09.553731    4028 logs.go:123] Gathering logs for coredns [d09946f99776] ...
	I0729 04:23:09.553744    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09946f99776"
	I0729 04:23:09.564831    4028 logs.go:123] Gathering logs for kube-scheduler [fc169c0c2174] ...
	I0729 04:23:09.564845    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc169c0c2174"
	I0729 04:23:09.579829    4028 logs.go:123] Gathering logs for kube-proxy [3f312c40ad82] ...
	I0729 04:23:09.579838    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f312c40ad82"
	I0729 04:23:09.591401    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:23:09.591411    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:23:09.625121    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:23:09.625130    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:23:09.662713    4028 logs.go:123] Gathering logs for kube-apiserver [d647a062be10] ...
	I0729 04:23:09.662727    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d647a062be10"
	I0729 04:23:09.677190    4028 logs.go:123] Gathering logs for kube-controller-manager [d41466ebf5b2] ...
	I0729 04:23:09.677202    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d41466ebf5b2"
	I0729 04:23:09.698803    4028 logs.go:123] Gathering logs for storage-provisioner [cc0a68aa7fcb] ...
	I0729 04:23:09.698816    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0a68aa7fcb"
	I0729 04:23:09.710661    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:23:09.710672    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:23:12.223863    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:23:17.225334    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:23:17.225675    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:23:17.263995    4028 logs.go:276] 1 containers: [d647a062be10]
	I0729 04:23:17.264136    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:23:17.284369    4028 logs.go:276] 1 containers: [e2e048041390]
	I0729 04:23:17.284463    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:23:17.298855    4028 logs.go:276] 2 containers: [1503ff4eb402 d09946f99776]
	I0729 04:23:17.298931    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:23:17.312002    4028 logs.go:276] 1 containers: [fc169c0c2174]
	I0729 04:23:17.312071    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:23:17.326588    4028 logs.go:276] 1 containers: [3f312c40ad82]
	I0729 04:23:17.326659    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:23:17.337240    4028 logs.go:276] 1 containers: [d41466ebf5b2]
	I0729 04:23:17.337303    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:23:17.347579    4028 logs.go:276] 0 containers: []
	W0729 04:23:17.347590    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:23:17.347649    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:23:17.358255    4028 logs.go:276] 1 containers: [cc0a68aa7fcb]
	I0729 04:23:17.358269    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:23:17.358275    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:23:17.362534    4028 logs.go:123] Gathering logs for kube-apiserver [d647a062be10] ...
	I0729 04:23:17.362543    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d647a062be10"
	I0729 04:23:17.383142    4028 logs.go:123] Gathering logs for etcd [e2e048041390] ...
	I0729 04:23:17.383155    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e048041390"
	I0729 04:23:17.399194    4028 logs.go:123] Gathering logs for coredns [d09946f99776] ...
	I0729 04:23:17.399209    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09946f99776"
	I0729 04:23:17.411186    4028 logs.go:123] Gathering logs for kube-proxy [3f312c40ad82] ...
	I0729 04:23:17.411203    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f312c40ad82"
	I0729 04:23:17.422317    4028 logs.go:123] Gathering logs for kube-controller-manager [d41466ebf5b2] ...
	I0729 04:23:17.422326    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d41466ebf5b2"
	I0729 04:23:17.446187    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:23:17.446197    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:23:17.459983    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:23:17.459992    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:23:17.494935    4028 logs.go:123] Gathering logs for coredns [1503ff4eb402] ...
	I0729 04:23:17.494942    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1503ff4eb402"
	I0729 04:23:17.506474    4028 logs.go:123] Gathering logs for kube-scheduler [fc169c0c2174] ...
	I0729 04:23:17.506488    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc169c0c2174"
	I0729 04:23:17.521474    4028 logs.go:123] Gathering logs for storage-provisioner [cc0a68aa7fcb] ...
	I0729 04:23:17.521490    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0a68aa7fcb"
	I0729 04:23:17.532985    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:23:17.532994    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:23:17.557772    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:23:17.557779    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:23:20.093490    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:23:25.095906    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:23:25.096338    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:23:25.137282    4028 logs.go:276] 1 containers: [d647a062be10]
	I0729 04:23:25.137425    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:23:25.159879    4028 logs.go:276] 1 containers: [e2e048041390]
	I0729 04:23:25.159969    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:23:25.175040    4028 logs.go:276] 2 containers: [1503ff4eb402 d09946f99776]
	I0729 04:23:25.175109    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:23:25.187652    4028 logs.go:276] 1 containers: [fc169c0c2174]
	I0729 04:23:25.187729    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:23:25.199016    4028 logs.go:276] 1 containers: [3f312c40ad82]
	I0729 04:23:25.199087    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:23:25.209886    4028 logs.go:276] 1 containers: [d41466ebf5b2]
	I0729 04:23:25.209950    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:23:25.220705    4028 logs.go:276] 0 containers: []
	W0729 04:23:25.220717    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:23:25.220772    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:23:25.231663    4028 logs.go:276] 1 containers: [cc0a68aa7fcb]
	I0729 04:23:25.231679    4028 logs.go:123] Gathering logs for storage-provisioner [cc0a68aa7fcb] ...
	I0729 04:23:25.231685    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0a68aa7fcb"
	I0729 04:23:25.249013    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:23:25.249027    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:23:25.253885    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:23:25.253894    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:23:25.288535    4028 logs.go:123] Gathering logs for kube-apiserver [d647a062be10] ...
	I0729 04:23:25.288549    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d647a062be10"
	I0729 04:23:25.303552    4028 logs.go:123] Gathering logs for coredns [1503ff4eb402] ...
	I0729 04:23:25.303565    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1503ff4eb402"
	I0729 04:23:25.315496    4028 logs.go:123] Gathering logs for coredns [d09946f99776] ...
	I0729 04:23:25.315509    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09946f99776"
	I0729 04:23:25.327606    4028 logs.go:123] Gathering logs for kube-proxy [3f312c40ad82] ...
	I0729 04:23:25.327620    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f312c40ad82"
	I0729 04:23:25.339459    4028 logs.go:123] Gathering logs for kube-controller-manager [d41466ebf5b2] ...
	I0729 04:23:25.339472    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d41466ebf5b2"
	I0729 04:23:25.356819    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:23:25.356829    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:23:25.368804    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:23:25.368817    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:23:25.406026    4028 logs.go:123] Gathering logs for etcd [e2e048041390] ...
	I0729 04:23:25.406035    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e048041390"
	I0729 04:23:25.420617    4028 logs.go:123] Gathering logs for kube-scheduler [fc169c0c2174] ...
	I0729 04:23:25.420628    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc169c0c2174"
	I0729 04:23:25.436031    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:23:25.436044    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:23:27.962269    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:23:32.964535    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:23:32.964877    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:23:32.999050    4028 logs.go:276] 1 containers: [d647a062be10]
	I0729 04:23:32.999183    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:23:33.018455    4028 logs.go:276] 1 containers: [e2e048041390]
	I0729 04:23:33.018546    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:23:33.034528    4028 logs.go:276] 2 containers: [1503ff4eb402 d09946f99776]
	I0729 04:23:33.034604    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:23:33.046083    4028 logs.go:276] 1 containers: [fc169c0c2174]
	I0729 04:23:33.046148    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:23:33.056716    4028 logs.go:276] 1 containers: [3f312c40ad82]
	I0729 04:23:33.056785    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:23:33.067474    4028 logs.go:276] 1 containers: [d41466ebf5b2]
	I0729 04:23:33.067543    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:23:33.082130    4028 logs.go:276] 0 containers: []
	W0729 04:23:33.082141    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:23:33.082199    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:23:33.092584    4028 logs.go:276] 1 containers: [cc0a68aa7fcb]
	I0729 04:23:33.092599    4028 logs.go:123] Gathering logs for etcd [e2e048041390] ...
	I0729 04:23:33.092603    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e048041390"
	I0729 04:23:33.106590    4028 logs.go:123] Gathering logs for kube-scheduler [fc169c0c2174] ...
	I0729 04:23:33.106602    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc169c0c2174"
	I0729 04:23:33.122309    4028 logs.go:123] Gathering logs for kube-apiserver [d647a062be10] ...
	I0729 04:23:33.122320    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d647a062be10"
	I0729 04:23:33.141279    4028 logs.go:123] Gathering logs for coredns [1503ff4eb402] ...
	I0729 04:23:33.141289    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1503ff4eb402"
	I0729 04:23:33.152880    4028 logs.go:123] Gathering logs for coredns [d09946f99776] ...
	I0729 04:23:33.152893    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09946f99776"
	I0729 04:23:33.163864    4028 logs.go:123] Gathering logs for kube-proxy [3f312c40ad82] ...
	I0729 04:23:33.163877    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f312c40ad82"
	I0729 04:23:33.175629    4028 logs.go:123] Gathering logs for kube-controller-manager [d41466ebf5b2] ...
	I0729 04:23:33.175638    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d41466ebf5b2"
	I0729 04:23:33.193189    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:23:33.193199    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:23:33.226866    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:23:33.226875    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:23:33.231018    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:23:33.231026    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:23:33.266346    4028 logs.go:123] Gathering logs for storage-provisioner [cc0a68aa7fcb] ...
	I0729 04:23:33.266369    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0a68aa7fcb"
	I0729 04:23:33.278351    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:23:33.278364    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:23:33.301540    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:23:33.301550    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:23:35.816412    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:23:40.818733    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:23:40.819130    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:23:40.854330    4028 logs.go:276] 1 containers: [d647a062be10]
	I0729 04:23:40.854487    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:23:40.873844    4028 logs.go:276] 1 containers: [e2e048041390]
	I0729 04:23:40.873935    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:23:40.888624    4028 logs.go:276] 2 containers: [1503ff4eb402 d09946f99776]
	I0729 04:23:40.888689    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:23:40.907373    4028 logs.go:276] 1 containers: [fc169c0c2174]
	I0729 04:23:40.907446    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:23:40.918344    4028 logs.go:276] 1 containers: [3f312c40ad82]
	I0729 04:23:40.918423    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:23:40.929440    4028 logs.go:276] 1 containers: [d41466ebf5b2]
	I0729 04:23:40.929510    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:23:40.940659    4028 logs.go:276] 0 containers: []
	W0729 04:23:40.940670    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:23:40.940731    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:23:40.952086    4028 logs.go:276] 1 containers: [cc0a68aa7fcb]
	I0729 04:23:40.952101    4028 logs.go:123] Gathering logs for kube-controller-manager [d41466ebf5b2] ...
	I0729 04:23:40.952106    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d41466ebf5b2"
	I0729 04:23:40.970558    4028 logs.go:123] Gathering logs for storage-provisioner [cc0a68aa7fcb] ...
	I0729 04:23:40.970569    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0a68aa7fcb"
	I0729 04:23:40.982940    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:23:40.982952    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:23:40.995142    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:23:40.995152    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:23:41.030985    4028 logs.go:123] Gathering logs for etcd [e2e048041390] ...
	I0729 04:23:41.030995    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e048041390"
	I0729 04:23:41.045617    4028 logs.go:123] Gathering logs for coredns [1503ff4eb402] ...
	I0729 04:23:41.045628    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1503ff4eb402"
	I0729 04:23:41.057918    4028 logs.go:123] Gathering logs for coredns [d09946f99776] ...
	I0729 04:23:41.057928    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09946f99776"
	I0729 04:23:41.069648    4028 logs.go:123] Gathering logs for kube-scheduler [fc169c0c2174] ...
	I0729 04:23:41.069660    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc169c0c2174"
	I0729 04:23:41.084766    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:23:41.084777    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:23:41.089203    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:23:41.089212    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:23:41.129310    4028 logs.go:123] Gathering logs for kube-apiserver [d647a062be10] ...
	I0729 04:23:41.129323    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d647a062be10"
	I0729 04:23:41.144686    4028 logs.go:123] Gathering logs for kube-proxy [3f312c40ad82] ...
	I0729 04:23:41.144697    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f312c40ad82"
	I0729 04:23:41.156885    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:23:41.156898    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:23:43.681801    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:23:48.683231    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:23:48.683557    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:23:48.719212    4028 logs.go:276] 1 containers: [d647a062be10]
	I0729 04:23:48.719338    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:23:48.737343    4028 logs.go:276] 1 containers: [e2e048041390]
	I0729 04:23:48.737427    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:23:48.751301    4028 logs.go:276] 2 containers: [1503ff4eb402 d09946f99776]
	I0729 04:23:48.751366    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:23:48.764874    4028 logs.go:276] 1 containers: [fc169c0c2174]
	I0729 04:23:48.764937    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:23:48.776477    4028 logs.go:276] 1 containers: [3f312c40ad82]
	I0729 04:23:48.776539    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:23:48.787256    4028 logs.go:276] 1 containers: [d41466ebf5b2]
	I0729 04:23:48.787319    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:23:48.797703    4028 logs.go:276] 0 containers: []
	W0729 04:23:48.797713    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:23:48.797771    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:23:48.808472    4028 logs.go:276] 1 containers: [cc0a68aa7fcb]
	I0729 04:23:48.808487    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:23:48.808492    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:23:48.861839    4028 logs.go:123] Gathering logs for kube-apiserver [d647a062be10] ...
	I0729 04:23:48.861860    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d647a062be10"
	I0729 04:23:48.891454    4028 logs.go:123] Gathering logs for coredns [1503ff4eb402] ...
	I0729 04:23:48.891470    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1503ff4eb402"
	I0729 04:23:48.914140    4028 logs.go:123] Gathering logs for kube-controller-manager [d41466ebf5b2] ...
	I0729 04:23:48.914151    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d41466ebf5b2"
	I0729 04:23:48.940910    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:23:48.940922    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:23:48.976683    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:23:48.976701    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:23:48.985683    4028 logs.go:123] Gathering logs for kube-scheduler [fc169c0c2174] ...
	I0729 04:23:48.985700    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc169c0c2174"
	I0729 04:23:49.007705    4028 logs.go:123] Gathering logs for kube-proxy [3f312c40ad82] ...
	I0729 04:23:49.007718    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f312c40ad82"
	I0729 04:23:49.023738    4028 logs.go:123] Gathering logs for storage-provisioner [cc0a68aa7fcb] ...
	I0729 04:23:49.023751    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0a68aa7fcb"
	I0729 04:23:49.035223    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:23:49.035232    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:23:49.060507    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:23:49.060514    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:23:49.072573    4028 logs.go:123] Gathering logs for etcd [e2e048041390] ...
	I0729 04:23:49.072586    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e048041390"
	I0729 04:23:49.086605    4028 logs.go:123] Gathering logs for coredns [d09946f99776] ...
	I0729 04:23:49.086616    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09946f99776"
	I0729 04:23:51.600590    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:23:56.602811    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:23:56.603238    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:23:56.642679    4028 logs.go:276] 1 containers: [d647a062be10]
	I0729 04:23:56.642820    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:23:56.664415    4028 logs.go:276] 1 containers: [e2e048041390]
	I0729 04:23:56.664502    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:23:56.686027    4028 logs.go:276] 4 containers: [4d5ad41d5a9c c46db4e4d78b 1503ff4eb402 d09946f99776]
	I0729 04:23:56.686104    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:23:56.698044    4028 logs.go:276] 1 containers: [fc169c0c2174]
	I0729 04:23:56.698110    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:23:56.708306    4028 logs.go:276] 1 containers: [3f312c40ad82]
	I0729 04:23:56.708372    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:23:56.719960    4028 logs.go:276] 1 containers: [d41466ebf5b2]
	I0729 04:23:56.720032    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:23:56.730522    4028 logs.go:276] 0 containers: []
	W0729 04:23:56.730534    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:23:56.730591    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:23:56.741017    4028 logs.go:276] 1 containers: [cc0a68aa7fcb]
	I0729 04:23:56.741034    4028 logs.go:123] Gathering logs for coredns [d09946f99776] ...
	I0729 04:23:56.741041    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09946f99776"
	I0729 04:23:56.752382    4028 logs.go:123] Gathering logs for kube-controller-manager [d41466ebf5b2] ...
	I0729 04:23:56.752396    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d41466ebf5b2"
	I0729 04:23:56.772797    4028 logs.go:123] Gathering logs for coredns [1503ff4eb402] ...
	I0729 04:23:56.772809    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1503ff4eb402"
	I0729 04:23:56.785790    4028 logs.go:123] Gathering logs for etcd [e2e048041390] ...
	I0729 04:23:56.785803    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e048041390"
	I0729 04:23:56.800045    4028 logs.go:123] Gathering logs for kube-proxy [3f312c40ad82] ...
	I0729 04:23:56.800058    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f312c40ad82"
	I0729 04:23:56.811728    4028 logs.go:123] Gathering logs for storage-provisioner [cc0a68aa7fcb] ...
	I0729 04:23:56.811740    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0a68aa7fcb"
	I0729 04:23:56.823220    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:23:56.823233    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:23:56.835510    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:23:56.835524    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:23:56.840014    4028 logs.go:123] Gathering logs for coredns [4d5ad41d5a9c] ...
	I0729 04:23:56.840022    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5ad41d5a9c"
	I0729 04:23:56.851763    4028 logs.go:123] Gathering logs for kube-scheduler [fc169c0c2174] ...
	I0729 04:23:56.851775    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc169c0c2174"
	I0729 04:23:56.867556    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:23:56.867569    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:23:56.902335    4028 logs.go:123] Gathering logs for kube-apiserver [d647a062be10] ...
	I0729 04:23:56.902347    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d647a062be10"
	I0729 04:23:56.916950    4028 logs.go:123] Gathering logs for coredns [c46db4e4d78b] ...
	I0729 04:23:56.916961    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c46db4e4d78b"
	I0729 04:23:56.928783    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:23:56.928795    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:23:56.953967    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:23:56.953978    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:23:59.490980    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:24:04.492067    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:24:04.492554    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:24:04.534340    4028 logs.go:276] 1 containers: [d647a062be10]
	I0729 04:24:04.534483    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:24:04.556706    4028 logs.go:276] 1 containers: [e2e048041390]
	I0729 04:24:04.556828    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:24:04.573010    4028 logs.go:276] 4 containers: [4d5ad41d5a9c c46db4e4d78b 1503ff4eb402 d09946f99776]
	I0729 04:24:04.573091    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:24:04.585398    4028 logs.go:276] 1 containers: [fc169c0c2174]
	I0729 04:24:04.585465    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:24:04.596341    4028 logs.go:276] 1 containers: [3f312c40ad82]
	I0729 04:24:04.596407    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:24:04.607210    4028 logs.go:276] 1 containers: [d41466ebf5b2]
	I0729 04:24:04.607271    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:24:04.617735    4028 logs.go:276] 0 containers: []
	W0729 04:24:04.617746    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:24:04.617806    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:24:04.628302    4028 logs.go:276] 1 containers: [cc0a68aa7fcb]
	I0729 04:24:04.628324    4028 logs.go:123] Gathering logs for etcd [e2e048041390] ...
	I0729 04:24:04.628329    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e048041390"
	I0729 04:24:04.642407    4028 logs.go:123] Gathering logs for coredns [c46db4e4d78b] ...
	I0729 04:24:04.642418    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c46db4e4d78b"
	I0729 04:24:04.653361    4028 logs.go:123] Gathering logs for kube-controller-manager [d41466ebf5b2] ...
	I0729 04:24:04.653374    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d41466ebf5b2"
	I0729 04:24:04.671044    4028 logs.go:123] Gathering logs for kube-apiserver [d647a062be10] ...
	I0729 04:24:04.671057    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d647a062be10"
	I0729 04:24:04.685375    4028 logs.go:123] Gathering logs for coredns [4d5ad41d5a9c] ...
	I0729 04:24:04.685384    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5ad41d5a9c"
	I0729 04:24:04.696520    4028 logs.go:123] Gathering logs for coredns [d09946f99776] ...
	I0729 04:24:04.696532    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09946f99776"
	I0729 04:24:04.707923    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:24:04.707937    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:24:04.719611    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:24:04.719622    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:24:04.724374    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:24:04.724382    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:24:04.758251    4028 logs.go:123] Gathering logs for coredns [1503ff4eb402] ...
	I0729 04:24:04.758263    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1503ff4eb402"
	I0729 04:24:04.770366    4028 logs.go:123] Gathering logs for kube-scheduler [fc169c0c2174] ...
	I0729 04:24:04.770376    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc169c0c2174"
	I0729 04:24:04.785032    4028 logs.go:123] Gathering logs for storage-provisioner [cc0a68aa7fcb] ...
	I0729 04:24:04.785045    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0a68aa7fcb"
	I0729 04:24:04.796237    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:24:04.796249    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:24:04.820090    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:24:04.820097    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:24:04.855326    4028 logs.go:123] Gathering logs for kube-proxy [3f312c40ad82] ...
	I0729 04:24:04.855339    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f312c40ad82"
	I0729 04:24:07.379010    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:24:12.381746    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:24:12.382206    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:24:12.421784    4028 logs.go:276] 1 containers: [d647a062be10]
	I0729 04:24:12.421924    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:24:12.443383    4028 logs.go:276] 1 containers: [e2e048041390]
	I0729 04:24:12.443496    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:24:12.460683    4028 logs.go:276] 4 containers: [4d5ad41d5a9c c46db4e4d78b 1503ff4eb402 d09946f99776]
	I0729 04:24:12.460760    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:24:12.477546    4028 logs.go:276] 1 containers: [fc169c0c2174]
	I0729 04:24:12.477609    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:24:12.487952    4028 logs.go:276] 1 containers: [3f312c40ad82]
	I0729 04:24:12.488024    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:24:12.498683    4028 logs.go:276] 1 containers: [d41466ebf5b2]
	I0729 04:24:12.498749    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:24:12.508900    4028 logs.go:276] 0 containers: []
	W0729 04:24:12.508910    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:24:12.508967    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:24:12.519368    4028 logs.go:276] 1 containers: [cc0a68aa7fcb]
	I0729 04:24:12.519386    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:24:12.519391    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:24:12.553219    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:24:12.553230    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:24:12.557213    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:24:12.557221    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:24:12.590481    4028 logs.go:123] Gathering logs for coredns [4d5ad41d5a9c] ...
	I0729 04:24:12.590491    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5ad41d5a9c"
	I0729 04:24:12.602004    4028 logs.go:123] Gathering logs for coredns [c46db4e4d78b] ...
	I0729 04:24:12.602013    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c46db4e4d78b"
	I0729 04:24:12.613593    4028 logs.go:123] Gathering logs for kube-proxy [3f312c40ad82] ...
	I0729 04:24:12.613606    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f312c40ad82"
	I0729 04:24:12.624888    4028 logs.go:123] Gathering logs for storage-provisioner [cc0a68aa7fcb] ...
	I0729 04:24:12.624898    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0a68aa7fcb"
	I0729 04:24:12.635622    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:24:12.635635    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:24:12.659668    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:24:12.659674    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:24:12.671115    4028 logs.go:123] Gathering logs for coredns [1503ff4eb402] ...
	I0729 04:24:12.671123    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1503ff4eb402"
	I0729 04:24:12.682994    4028 logs.go:123] Gathering logs for coredns [d09946f99776] ...
	I0729 04:24:12.683001    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09946f99776"
	I0729 04:24:12.694686    4028 logs.go:123] Gathering logs for kube-apiserver [d647a062be10] ...
	I0729 04:24:12.694695    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d647a062be10"
	I0729 04:24:12.709946    4028 logs.go:123] Gathering logs for etcd [e2e048041390] ...
	I0729 04:24:12.709960    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e048041390"
	I0729 04:24:12.728467    4028 logs.go:123] Gathering logs for kube-scheduler [fc169c0c2174] ...
	I0729 04:24:12.728480    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc169c0c2174"
	I0729 04:24:12.744634    4028 logs.go:123] Gathering logs for kube-controller-manager [d41466ebf5b2] ...
	I0729 04:24:12.744647    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d41466ebf5b2"
	I0729 04:24:15.265340    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:24:20.267390    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:24:20.267655    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:24:20.293895    4028 logs.go:276] 1 containers: [d647a062be10]
	I0729 04:24:20.294010    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:24:20.313144    4028 logs.go:276] 1 containers: [e2e048041390]
	I0729 04:24:20.313223    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:24:20.328308    4028 logs.go:276] 4 containers: [4d5ad41d5a9c c46db4e4d78b 1503ff4eb402 d09946f99776]
	I0729 04:24:20.328387    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:24:20.339181    4028 logs.go:276] 1 containers: [fc169c0c2174]
	I0729 04:24:20.339251    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:24:20.349514    4028 logs.go:276] 1 containers: [3f312c40ad82]
	I0729 04:24:20.349578    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:24:20.360600    4028 logs.go:276] 1 containers: [d41466ebf5b2]
	I0729 04:24:20.360671    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:24:20.370444    4028 logs.go:276] 0 containers: []
	W0729 04:24:20.370456    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:24:20.370510    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:24:20.380808    4028 logs.go:276] 1 containers: [cc0a68aa7fcb]
	I0729 04:24:20.380827    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:24:20.380832    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:24:20.418997    4028 logs.go:123] Gathering logs for coredns [d09946f99776] ...
	I0729 04:24:20.419008    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09946f99776"
	I0729 04:24:20.430191    4028 logs.go:123] Gathering logs for kube-proxy [3f312c40ad82] ...
	I0729 04:24:20.430201    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f312c40ad82"
	I0729 04:24:20.441891    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:24:20.441903    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:24:20.453840    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:24:20.453853    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:24:20.487313    4028 logs.go:123] Gathering logs for kube-apiserver [d647a062be10] ...
	I0729 04:24:20.487319    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d647a062be10"
	I0729 04:24:20.502033    4028 logs.go:123] Gathering logs for kube-scheduler [fc169c0c2174] ...
	I0729 04:24:20.502046    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc169c0c2174"
	I0729 04:24:20.516822    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:24:20.516834    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:24:20.541982    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:24:20.541993    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:24:20.546632    4028 logs.go:123] Gathering logs for kube-controller-manager [d41466ebf5b2] ...
	I0729 04:24:20.546640    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d41466ebf5b2"
	I0729 04:24:20.567335    4028 logs.go:123] Gathering logs for coredns [4d5ad41d5a9c] ...
	I0729 04:24:20.567344    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5ad41d5a9c"
	I0729 04:24:20.582061    4028 logs.go:123] Gathering logs for coredns [c46db4e4d78b] ...
	I0729 04:24:20.582072    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c46db4e4d78b"
	I0729 04:24:20.593272    4028 logs.go:123] Gathering logs for coredns [1503ff4eb402] ...
	I0729 04:24:20.593287    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1503ff4eb402"
	I0729 04:24:20.604868    4028 logs.go:123] Gathering logs for storage-provisioner [cc0a68aa7fcb] ...
	I0729 04:24:20.604881    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0a68aa7fcb"
	I0729 04:24:20.616582    4028 logs.go:123] Gathering logs for etcd [e2e048041390] ...
	I0729 04:24:20.616592    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e048041390"
	I0729 04:24:23.132623    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:24:28.134899    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:24:28.135255    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:24:28.170111    4028 logs.go:276] 1 containers: [d647a062be10]
	I0729 04:24:28.170246    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:24:28.190926    4028 logs.go:276] 1 containers: [e2e048041390]
	I0729 04:24:28.191024    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:24:28.205559    4028 logs.go:276] 4 containers: [4d5ad41d5a9c c46db4e4d78b 1503ff4eb402 d09946f99776]
	I0729 04:24:28.205639    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:24:28.217713    4028 logs.go:276] 1 containers: [fc169c0c2174]
	I0729 04:24:28.217784    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:24:28.229971    4028 logs.go:276] 1 containers: [3f312c40ad82]
	I0729 04:24:28.230039    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:24:28.240640    4028 logs.go:276] 1 containers: [d41466ebf5b2]
	I0729 04:24:28.240708    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:24:28.252446    4028 logs.go:276] 0 containers: []
	W0729 04:24:28.252457    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:24:28.252520    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:24:28.264957    4028 logs.go:276] 1 containers: [cc0a68aa7fcb]
	I0729 04:24:28.264973    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:24:28.264978    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:24:28.298480    4028 logs.go:123] Gathering logs for coredns [1503ff4eb402] ...
	I0729 04:24:28.298487    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1503ff4eb402"
	I0729 04:24:28.310031    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:24:28.310041    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:24:28.314616    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:24:28.314623    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:24:28.348909    4028 logs.go:123] Gathering logs for etcd [e2e048041390] ...
	I0729 04:24:28.348918    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e048041390"
	I0729 04:24:28.363490    4028 logs.go:123] Gathering logs for coredns [4d5ad41d5a9c] ...
	I0729 04:24:28.363505    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5ad41d5a9c"
	I0729 04:24:28.375422    4028 logs.go:123] Gathering logs for storage-provisioner [cc0a68aa7fcb] ...
	I0729 04:24:28.375436    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0a68aa7fcb"
	I0729 04:24:28.386343    4028 logs.go:123] Gathering logs for coredns [c46db4e4d78b] ...
	I0729 04:24:28.386355    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c46db4e4d78b"
	I0729 04:24:28.398178    4028 logs.go:123] Gathering logs for coredns [d09946f99776] ...
	I0729 04:24:28.398189    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09946f99776"
	I0729 04:24:28.409768    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:24:28.409779    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:24:28.422269    4028 logs.go:123] Gathering logs for kube-apiserver [d647a062be10] ...
	I0729 04:24:28.422281    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d647a062be10"
	I0729 04:24:28.436432    4028 logs.go:123] Gathering logs for kube-scheduler [fc169c0c2174] ...
	I0729 04:24:28.436442    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc169c0c2174"
	I0729 04:24:28.451344    4028 logs.go:123] Gathering logs for kube-proxy [3f312c40ad82] ...
	I0729 04:24:28.451358    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f312c40ad82"
	I0729 04:24:28.463102    4028 logs.go:123] Gathering logs for kube-controller-manager [d41466ebf5b2] ...
	I0729 04:24:28.463115    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d41466ebf5b2"
	I0729 04:24:28.480528    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:24:28.480537    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:24:31.006691    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:24:36.009213    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:24:36.009333    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:24:36.023062    4028 logs.go:276] 1 containers: [d647a062be10]
	I0729 04:24:36.023135    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:24:36.033593    4028 logs.go:276] 1 containers: [e2e048041390]
	I0729 04:24:36.033656    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:24:36.043892    4028 logs.go:276] 4 containers: [4d5ad41d5a9c c46db4e4d78b 1503ff4eb402 d09946f99776]
	I0729 04:24:36.043966    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:24:36.054463    4028 logs.go:276] 1 containers: [fc169c0c2174]
	I0729 04:24:36.054531    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:24:36.065343    4028 logs.go:276] 1 containers: [3f312c40ad82]
	I0729 04:24:36.065413    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:24:36.075814    4028 logs.go:276] 1 containers: [d41466ebf5b2]
	I0729 04:24:36.075881    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:24:36.085694    4028 logs.go:276] 0 containers: []
	W0729 04:24:36.085704    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:24:36.085758    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:24:36.095659    4028 logs.go:276] 1 containers: [cc0a68aa7fcb]
	I0729 04:24:36.095675    4028 logs.go:123] Gathering logs for coredns [1503ff4eb402] ...
	I0729 04:24:36.095680    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1503ff4eb402"
	I0729 04:24:36.107650    4028 logs.go:123] Gathering logs for coredns [4d5ad41d5a9c] ...
	I0729 04:24:36.107662    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5ad41d5a9c"
	I0729 04:24:36.119844    4028 logs.go:123] Gathering logs for kube-proxy [3f312c40ad82] ...
	I0729 04:24:36.119857    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f312c40ad82"
	I0729 04:24:36.136175    4028 logs.go:123] Gathering logs for kube-controller-manager [d41466ebf5b2] ...
	I0729 04:24:36.136187    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d41466ebf5b2"
	I0729 04:24:36.160077    4028 logs.go:123] Gathering logs for storage-provisioner [cc0a68aa7fcb] ...
	I0729 04:24:36.160090    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0a68aa7fcb"
	I0729 04:24:36.171565    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:24:36.171574    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:24:36.207086    4028 logs.go:123] Gathering logs for etcd [e2e048041390] ...
	I0729 04:24:36.207095    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e048041390"
	I0729 04:24:36.220893    4028 logs.go:123] Gathering logs for coredns [c46db4e4d78b] ...
	I0729 04:24:36.220902    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c46db4e4d78b"
	I0729 04:24:36.232177    4028 logs.go:123] Gathering logs for kube-scheduler [fc169c0c2174] ...
	I0729 04:24:36.232187    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc169c0c2174"
	I0729 04:24:36.246968    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:24:36.246980    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:24:36.258979    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:24:36.258990    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:24:36.263285    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:24:36.263293    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:24:36.297600    4028 logs.go:123] Gathering logs for kube-apiserver [d647a062be10] ...
	I0729 04:24:36.297611    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d647a062be10"
	I0729 04:24:36.319384    4028 logs.go:123] Gathering logs for coredns [d09946f99776] ...
	I0729 04:24:36.319397    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09946f99776"
	I0729 04:24:36.332492    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:24:36.332501    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:24:38.857279    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:24:43.859394    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:24:43.859835    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:24:43.900329    4028 logs.go:276] 1 containers: [d647a062be10]
	I0729 04:24:43.900445    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:24:43.921581    4028 logs.go:276] 1 containers: [e2e048041390]
	I0729 04:24:43.921687    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:24:43.939980    4028 logs.go:276] 4 containers: [4d5ad41d5a9c c46db4e4d78b 1503ff4eb402 d09946f99776]
	I0729 04:24:43.940060    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:24:43.956841    4028 logs.go:276] 1 containers: [fc169c0c2174]
	I0729 04:24:43.956908    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:24:43.967364    4028 logs.go:276] 1 containers: [3f312c40ad82]
	I0729 04:24:43.967431    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:24:43.983147    4028 logs.go:276] 1 containers: [d41466ebf5b2]
	I0729 04:24:43.983212    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:24:43.993213    4028 logs.go:276] 0 containers: []
	W0729 04:24:43.993228    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:24:43.993277    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:24:44.003762    4028 logs.go:276] 1 containers: [cc0a68aa7fcb]
	I0729 04:24:44.003779    4028 logs.go:123] Gathering logs for coredns [d09946f99776] ...
	I0729 04:24:44.003784    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09946f99776"
	I0729 04:24:44.015137    4028 logs.go:123] Gathering logs for kube-controller-manager [d41466ebf5b2] ...
	I0729 04:24:44.015146    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d41466ebf5b2"
	I0729 04:24:44.032542    4028 logs.go:123] Gathering logs for storage-provisioner [cc0a68aa7fcb] ...
	I0729 04:24:44.032553    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0a68aa7fcb"
	I0729 04:24:44.048420    4028 logs.go:123] Gathering logs for kube-apiserver [d647a062be10] ...
	I0729 04:24:44.048433    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d647a062be10"
	I0729 04:24:44.062893    4028 logs.go:123] Gathering logs for etcd [e2e048041390] ...
	I0729 04:24:44.062902    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e048041390"
	I0729 04:24:44.078746    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:24:44.078756    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:24:44.102841    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:24:44.102852    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:24:44.107289    4028 logs.go:123] Gathering logs for coredns [c46db4e4d78b] ...
	I0729 04:24:44.107294    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c46db4e4d78b"
	I0729 04:24:44.119267    4028 logs.go:123] Gathering logs for coredns [1503ff4eb402] ...
	I0729 04:24:44.119278    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1503ff4eb402"
	I0729 04:24:44.130373    4028 logs.go:123] Gathering logs for kube-scheduler [fc169c0c2174] ...
	I0729 04:24:44.130383    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc169c0c2174"
	I0729 04:24:44.144738    4028 logs.go:123] Gathering logs for kube-proxy [3f312c40ad82] ...
	I0729 04:24:44.144748    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f312c40ad82"
	I0729 04:24:44.156682    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:24:44.156694    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:24:44.170332    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:24:44.170345    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:24:44.207531    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:24:44.207542    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:24:44.240408    4028 logs.go:123] Gathering logs for coredns [4d5ad41d5a9c] ...
	I0729 04:24:44.240422    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5ad41d5a9c"
	I0729 04:24:46.756594    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:24:51.759096    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:24:51.759228    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:24:51.770532    4028 logs.go:276] 1 containers: [d647a062be10]
	I0729 04:24:51.770587    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:24:51.782163    4028 logs.go:276] 1 containers: [e2e048041390]
	I0729 04:24:51.782232    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:24:51.792818    4028 logs.go:276] 4 containers: [4d5ad41d5a9c c46db4e4d78b 1503ff4eb402 d09946f99776]
	I0729 04:24:51.792885    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:24:51.804555    4028 logs.go:276] 1 containers: [fc169c0c2174]
	I0729 04:24:51.804621    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:24:51.814988    4028 logs.go:276] 1 containers: [3f312c40ad82]
	I0729 04:24:51.815046    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:24:51.825531    4028 logs.go:276] 1 containers: [d41466ebf5b2]
	I0729 04:24:51.825587    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:24:51.834962    4028 logs.go:276] 0 containers: []
	W0729 04:24:51.834974    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:24:51.835023    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:24:51.845585    4028 logs.go:276] 1 containers: [cc0a68aa7fcb]
	I0729 04:24:51.845602    4028 logs.go:123] Gathering logs for coredns [c46db4e4d78b] ...
	I0729 04:24:51.845607    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c46db4e4d78b"
	I0729 04:24:51.857502    4028 logs.go:123] Gathering logs for coredns [d09946f99776] ...
	I0729 04:24:51.857514    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09946f99776"
	I0729 04:24:51.873309    4028 logs.go:123] Gathering logs for kube-apiserver [d647a062be10] ...
	I0729 04:24:51.873320    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d647a062be10"
	I0729 04:24:51.886979    4028 logs.go:123] Gathering logs for etcd [e2e048041390] ...
	I0729 04:24:51.886989    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e048041390"
	I0729 04:24:51.900807    4028 logs.go:123] Gathering logs for coredns [1503ff4eb402] ...
	I0729 04:24:51.900820    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1503ff4eb402"
	I0729 04:24:51.911756    4028 logs.go:123] Gathering logs for kube-scheduler [fc169c0c2174] ...
	I0729 04:24:51.911767    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc169c0c2174"
	I0729 04:24:51.926648    4028 logs.go:123] Gathering logs for storage-provisioner [cc0a68aa7fcb] ...
	I0729 04:24:51.926658    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0a68aa7fcb"
	I0729 04:24:51.938141    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:24:51.938151    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:24:51.961775    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:24:51.961786    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:24:51.974326    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:24:51.974352    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:24:51.978838    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:24:51.978846    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:24:52.013513    4028 logs.go:123] Gathering logs for kube-controller-manager [d41466ebf5b2] ...
	I0729 04:24:52.013525    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d41466ebf5b2"
	I0729 04:24:52.035918    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:24:52.035929    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:24:52.072608    4028 logs.go:123] Gathering logs for kube-proxy [3f312c40ad82] ...
	I0729 04:24:52.072616    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f312c40ad82"
	I0729 04:24:52.084919    4028 logs.go:123] Gathering logs for coredns [4d5ad41d5a9c] ...
	I0729 04:24:52.084931    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5ad41d5a9c"
	I0729 04:24:54.598324    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:24:59.600919    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:24:59.601331    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:24:59.645552    4028 logs.go:276] 1 containers: [d647a062be10]
	I0729 04:24:59.645698    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:24:59.666520    4028 logs.go:276] 1 containers: [e2e048041390]
	I0729 04:24:59.666627    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:24:59.681332    4028 logs.go:276] 4 containers: [4d5ad41d5a9c c46db4e4d78b 1503ff4eb402 d09946f99776]
	I0729 04:24:59.681413    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:24:59.693351    4028 logs.go:276] 1 containers: [fc169c0c2174]
	I0729 04:24:59.693422    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:24:59.703825    4028 logs.go:276] 1 containers: [3f312c40ad82]
	I0729 04:24:59.703882    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:24:59.714127    4028 logs.go:276] 1 containers: [d41466ebf5b2]
	I0729 04:24:59.714194    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:24:59.724295    4028 logs.go:276] 0 containers: []
	W0729 04:24:59.724305    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:24:59.724351    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:24:59.735275    4028 logs.go:276] 1 containers: [cc0a68aa7fcb]
	I0729 04:24:59.735293    4028 logs.go:123] Gathering logs for coredns [c46db4e4d78b] ...
	I0729 04:24:59.735300    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c46db4e4d78b"
	I0729 04:24:59.747437    4028 logs.go:123] Gathering logs for coredns [d09946f99776] ...
	I0729 04:24:59.747449    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09946f99776"
	I0729 04:24:59.758793    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:24:59.758802    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:24:59.783423    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:24:59.783432    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:24:59.787346    4028 logs.go:123] Gathering logs for kube-proxy [3f312c40ad82] ...
	I0729 04:24:59.787353    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f312c40ad82"
	I0729 04:24:59.799005    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:24:59.799018    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:24:59.810389    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:24:59.810402    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:24:59.845192    4028 logs.go:123] Gathering logs for coredns [4d5ad41d5a9c] ...
	I0729 04:24:59.845203    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5ad41d5a9c"
	I0729 04:24:59.856760    4028 logs.go:123] Gathering logs for storage-provisioner [cc0a68aa7fcb] ...
	I0729 04:24:59.856770    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0a68aa7fcb"
	I0729 04:24:59.874432    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:24:59.874445    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:24:59.910424    4028 logs.go:123] Gathering logs for kube-apiserver [d647a062be10] ...
	I0729 04:24:59.910431    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d647a062be10"
	I0729 04:24:59.924964    4028 logs.go:123] Gathering logs for etcd [e2e048041390] ...
	I0729 04:24:59.924976    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e048041390"
	I0729 04:24:59.939403    4028 logs.go:123] Gathering logs for coredns [1503ff4eb402] ...
	I0729 04:24:59.939416    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1503ff4eb402"
	I0729 04:24:59.956644    4028 logs.go:123] Gathering logs for kube-scheduler [fc169c0c2174] ...
	I0729 04:24:59.956658    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc169c0c2174"
	I0729 04:24:59.971354    4028 logs.go:123] Gathering logs for kube-controller-manager [d41466ebf5b2] ...
	I0729 04:24:59.971367    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d41466ebf5b2"
	I0729 04:25:02.490145    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:25:07.492244    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:25:07.492662    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:25:07.532340    4028 logs.go:276] 1 containers: [d647a062be10]
	I0729 04:25:07.532507    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:25:07.555046    4028 logs.go:276] 1 containers: [e2e048041390]
	I0729 04:25:07.555161    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:25:07.570848    4028 logs.go:276] 4 containers: [4d5ad41d5a9c c46db4e4d78b 1503ff4eb402 d09946f99776]
	I0729 04:25:07.570922    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:25:07.583296    4028 logs.go:276] 1 containers: [fc169c0c2174]
	I0729 04:25:07.583360    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:25:07.595683    4028 logs.go:276] 1 containers: [3f312c40ad82]
	I0729 04:25:07.595749    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:25:07.606371    4028 logs.go:276] 1 containers: [d41466ebf5b2]
	I0729 04:25:07.606439    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:25:07.618648    4028 logs.go:276] 0 containers: []
	W0729 04:25:07.618658    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:25:07.618709    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:25:07.630008    4028 logs.go:276] 1 containers: [cc0a68aa7fcb]
	I0729 04:25:07.630029    4028 logs.go:123] Gathering logs for etcd [e2e048041390] ...
	I0729 04:25:07.630035    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e048041390"
	I0729 04:25:07.644283    4028 logs.go:123] Gathering logs for coredns [1503ff4eb402] ...
	I0729 04:25:07.644295    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1503ff4eb402"
	I0729 04:25:07.655597    4028 logs.go:123] Gathering logs for kube-proxy [3f312c40ad82] ...
	I0729 04:25:07.655609    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f312c40ad82"
	I0729 04:25:07.666929    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:25:07.666940    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:25:07.691465    4028 logs.go:123] Gathering logs for kube-apiserver [d647a062be10] ...
	I0729 04:25:07.691475    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d647a062be10"
	I0729 04:25:07.705673    4028 logs.go:123] Gathering logs for coredns [4d5ad41d5a9c] ...
	I0729 04:25:07.705683    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5ad41d5a9c"
	I0729 04:25:07.717814    4028 logs.go:123] Gathering logs for coredns [c46db4e4d78b] ...
	I0729 04:25:07.717828    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c46db4e4d78b"
	I0729 04:25:07.730024    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:25:07.730037    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:25:07.734445    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:25:07.734453    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:25:07.768077    4028 logs.go:123] Gathering logs for coredns [d09946f99776] ...
	I0729 04:25:07.768089    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09946f99776"
	I0729 04:25:07.779677    4028 logs.go:123] Gathering logs for kube-controller-manager [d41466ebf5b2] ...
	I0729 04:25:07.779689    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d41466ebf5b2"
	I0729 04:25:07.800784    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:25:07.800794    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:25:07.834744    4028 logs.go:123] Gathering logs for kube-scheduler [fc169c0c2174] ...
	I0729 04:25:07.834752    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc169c0c2174"
	I0729 04:25:07.849660    4028 logs.go:123] Gathering logs for storage-provisioner [cc0a68aa7fcb] ...
	I0729 04:25:07.849670    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0a68aa7fcb"
	I0729 04:25:07.861607    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:25:07.861621    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:25:10.377152    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:25:15.379668    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:25:15.379771    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:25:15.407287    4028 logs.go:276] 1 containers: [d647a062be10]
	I0729 04:25:15.407341    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:25:15.422362    4028 logs.go:276] 1 containers: [e2e048041390]
	I0729 04:25:15.422437    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:25:15.434344    4028 logs.go:276] 4 containers: [4d5ad41d5a9c c46db4e4d78b 1503ff4eb402 d09946f99776]
	I0729 04:25:15.434415    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:25:15.446834    4028 logs.go:276] 1 containers: [fc169c0c2174]
	I0729 04:25:15.446885    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:25:15.457993    4028 logs.go:276] 1 containers: [3f312c40ad82]
	I0729 04:25:15.458047    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:25:15.471198    4028 logs.go:276] 1 containers: [d41466ebf5b2]
	I0729 04:25:15.471254    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:25:15.483931    4028 logs.go:276] 0 containers: []
	W0729 04:25:15.483942    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:25:15.483984    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:25:15.495725    4028 logs.go:276] 1 containers: [cc0a68aa7fcb]
	I0729 04:25:15.495738    4028 logs.go:123] Gathering logs for etcd [e2e048041390] ...
	I0729 04:25:15.495744    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e048041390"
	I0729 04:25:15.510088    4028 logs.go:123] Gathering logs for coredns [4d5ad41d5a9c] ...
	I0729 04:25:15.510098    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5ad41d5a9c"
	I0729 04:25:15.521947    4028 logs.go:123] Gathering logs for coredns [1503ff4eb402] ...
	I0729 04:25:15.521956    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1503ff4eb402"
	I0729 04:25:15.533929    4028 logs.go:123] Gathering logs for kube-proxy [3f312c40ad82] ...
	I0729 04:25:15.533940    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f312c40ad82"
	I0729 04:25:15.546510    4028 logs.go:123] Gathering logs for kube-controller-manager [d41466ebf5b2] ...
	I0729 04:25:15.546520    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d41466ebf5b2"
	I0729 04:25:15.566051    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:25:15.566061    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:25:15.570738    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:25:15.570749    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:25:15.605405    4028 logs.go:123] Gathering logs for coredns [d09946f99776] ...
	I0729 04:25:15.605415    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09946f99776"
	I0729 04:25:15.619923    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:25:15.619933    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:25:15.655007    4028 logs.go:123] Gathering logs for coredns [c46db4e4d78b] ...
	I0729 04:25:15.655016    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c46db4e4d78b"
	I0729 04:25:15.667989    4028 logs.go:123] Gathering logs for kube-apiserver [d647a062be10] ...
	I0729 04:25:15.668002    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d647a062be10"
	I0729 04:25:15.683201    4028 logs.go:123] Gathering logs for kube-scheduler [fc169c0c2174] ...
	I0729 04:25:15.683218    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc169c0c2174"
	I0729 04:25:15.699945    4028 logs.go:123] Gathering logs for storage-provisioner [cc0a68aa7fcb] ...
	I0729 04:25:15.699963    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0a68aa7fcb"
	I0729 04:25:15.713355    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:25:15.713368    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:25:15.739578    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:25:15.739601    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:25:18.255822    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:25:23.257891    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:25:23.257978    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:25:23.270193    4028 logs.go:276] 1 containers: [d647a062be10]
	I0729 04:25:23.270270    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:25:23.282520    4028 logs.go:276] 1 containers: [e2e048041390]
	I0729 04:25:23.282598    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:25:23.295293    4028 logs.go:276] 4 containers: [4d5ad41d5a9c c46db4e4d78b 1503ff4eb402 d09946f99776]
	I0729 04:25:23.295372    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:25:23.307602    4028 logs.go:276] 1 containers: [fc169c0c2174]
	I0729 04:25:23.307660    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:25:23.318142    4028 logs.go:276] 1 containers: [3f312c40ad82]
	I0729 04:25:23.318200    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:25:23.329061    4028 logs.go:276] 1 containers: [d41466ebf5b2]
	I0729 04:25:23.329133    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:25:23.339420    4028 logs.go:276] 0 containers: []
	W0729 04:25:23.339431    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:25:23.339490    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:25:23.350777    4028 logs.go:276] 1 containers: [cc0a68aa7fcb]
	I0729 04:25:23.350793    4028 logs.go:123] Gathering logs for coredns [4d5ad41d5a9c] ...
	I0729 04:25:23.350799    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5ad41d5a9c"
	I0729 04:25:23.362837    4028 logs.go:123] Gathering logs for coredns [c46db4e4d78b] ...
	I0729 04:25:23.362846    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c46db4e4d78b"
	I0729 04:25:23.380930    4028 logs.go:123] Gathering logs for kube-proxy [3f312c40ad82] ...
	I0729 04:25:23.380940    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f312c40ad82"
	I0729 04:25:23.392646    4028 logs.go:123] Gathering logs for kube-controller-manager [d41466ebf5b2] ...
	I0729 04:25:23.392660    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d41466ebf5b2"
	I0729 04:25:23.409719    4028 logs.go:123] Gathering logs for storage-provisioner [cc0a68aa7fcb] ...
	I0729 04:25:23.409729    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0a68aa7fcb"
	I0729 04:25:23.424454    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:25:23.424463    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:25:23.459759    4028 logs.go:123] Gathering logs for coredns [1503ff4eb402] ...
	I0729 04:25:23.459770    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1503ff4eb402"
	I0729 04:25:23.471682    4028 logs.go:123] Gathering logs for kube-apiserver [d647a062be10] ...
	I0729 04:25:23.471693    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d647a062be10"
	I0729 04:25:23.485309    4028 logs.go:123] Gathering logs for etcd [e2e048041390] ...
	I0729 04:25:23.485318    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e048041390"
	I0729 04:25:23.499174    4028 logs.go:123] Gathering logs for coredns [d09946f99776] ...
	I0729 04:25:23.499184    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09946f99776"
	I0729 04:25:23.511388    4028 logs.go:123] Gathering logs for kube-scheduler [fc169c0c2174] ...
	I0729 04:25:23.511399    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc169c0c2174"
	I0729 04:25:23.526112    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:25:23.526122    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:25:23.537528    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:25:23.537539    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:25:23.572067    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:25:23.572077    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:25:23.595440    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:25:23.595449    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:25:26.101306    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:25:31.103369    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:25:31.103826    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:25:31.141378    4028 logs.go:276] 1 containers: [d647a062be10]
	I0729 04:25:31.141539    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:25:31.161261    4028 logs.go:276] 1 containers: [e2e048041390]
	I0729 04:25:31.161353    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:25:31.175990    4028 logs.go:276] 4 containers: [4d5ad41d5a9c c46db4e4d78b 1503ff4eb402 d09946f99776]
	I0729 04:25:31.176073    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:25:31.188686    4028 logs.go:276] 1 containers: [fc169c0c2174]
	I0729 04:25:31.188761    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:25:31.199077    4028 logs.go:276] 1 containers: [3f312c40ad82]
	I0729 04:25:31.199155    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:25:31.209250    4028 logs.go:276] 1 containers: [d41466ebf5b2]
	I0729 04:25:31.209314    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:25:31.221163    4028 logs.go:276] 0 containers: []
	W0729 04:25:31.221177    4028 logs.go:278] No container was found matching "kindnet"
	I0729 04:25:31.221234    4028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:25:31.232386    4028 logs.go:276] 1 containers: [cc0a68aa7fcb]
	I0729 04:25:31.232403    4028 logs.go:123] Gathering logs for kube-controller-manager [d41466ebf5b2] ...
	I0729 04:25:31.232408    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d41466ebf5b2"
	I0729 04:25:31.253310    4028 logs.go:123] Gathering logs for kubelet ...
	I0729 04:25:31.253320    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:25:31.288804    4028 logs.go:123] Gathering logs for dmesg ...
	I0729 04:25:31.288811    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:25:31.292990    4028 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:25:31.292999    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:25:31.330367    4028 logs.go:123] Gathering logs for coredns [c46db4e4d78b] ...
	I0729 04:25:31.330379    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c46db4e4d78b"
	I0729 04:25:31.341783    4028 logs.go:123] Gathering logs for etcd [e2e048041390] ...
	I0729 04:25:31.341793    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2e048041390"
	I0729 04:25:31.354957    4028 logs.go:123] Gathering logs for storage-provisioner [cc0a68aa7fcb] ...
	I0729 04:25:31.354970    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc0a68aa7fcb"
	I0729 04:25:31.366140    4028 logs.go:123] Gathering logs for Docker ...
	I0729 04:25:31.366149    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:25:31.390388    4028 logs.go:123] Gathering logs for kube-scheduler [fc169c0c2174] ...
	I0729 04:25:31.390395    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc169c0c2174"
	I0729 04:25:31.404851    4028 logs.go:123] Gathering logs for kube-proxy [3f312c40ad82] ...
	I0729 04:25:31.404862    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f312c40ad82"
	I0729 04:25:31.416301    4028 logs.go:123] Gathering logs for container status ...
	I0729 04:25:31.416311    4028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:25:31.428485    4028 logs.go:123] Gathering logs for kube-apiserver [d647a062be10] ...
	I0729 04:25:31.428496    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d647a062be10"
	I0729 04:25:31.442741    4028 logs.go:123] Gathering logs for coredns [4d5ad41d5a9c] ...
	I0729 04:25:31.442754    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5ad41d5a9c"
	I0729 04:25:31.454509    4028 logs.go:123] Gathering logs for coredns [1503ff4eb402] ...
	I0729 04:25:31.454521    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1503ff4eb402"
	I0729 04:25:31.465946    4028 logs.go:123] Gathering logs for coredns [d09946f99776] ...
	I0729 04:25:31.465959    4028 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d09946f99776"
	I0729 04:25:33.979263    4028 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:25:38.981808    4028 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:25:38.985812    4028 out.go:177] 
	W0729 04:25:38.988784    4028 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0729 04:25:38.988790    4028 out.go:239] * 
	* 
	W0729 04:25:38.989229    4028 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:25:39.002816    4028 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-338000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (564.03s)

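The GUEST_START failure above means the apiserver's /healthz endpoint never answered within the 6m0s node wait. A minimal manual probe, assuming the guest from this run were still up and had curl available (the profile name stopped-upgrade-338000 is taken from the log; the command itself is an illustrative sketch, not part of the test):

	# Hypothetical manual healthz probe from the host, via minikube ssh.
	out/minikube-darwin-arm64 ssh -p stopped-upgrade-338000 "curl -sk https://localhost:8443/healthz"
	# A healthy apiserver prints "ok"; a hang here matches the repeated
	# "context deadline exceeded" seen in the api_server.go lines above.
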
TestPause/serial/Start (10s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-661000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-661000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.946935959s)

-- stdout --
	* [pause-661000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-661000" primary control-plane node in "pause-661000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-661000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-661000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-661000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-661000 -n pause-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-661000 -n pause-661000: exit status 7 (55.032583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-661000" host is not running, skipping log retrieval (state="Stopped")
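For reference, exit status 7 from "minikube status" is a bitmask rather than a generic error: per minikube's own status help text, 1 (host NOK) + 2 (kubelet NOK) + 4 (apiserver NOK), which is consistent with the "Stopped" host shown above. A quick re-check, assuming the same profile:

	# Hypothetical re-run of the post-mortem status check.
	out/minikube-darwin-arm64 status -p pause-661000; echo "exit=$?"
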
--- FAIL: TestPause/serial/Start (10.00s)

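Every failure from here on shares one proximate cause: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A host-side triage sketch, assuming socket_vmnet was installed via Homebrew as minikube's qemu2 documentation describes (the socket path comes from the logs; the commands are illustrative, not part of the test run):

	# Hypothetical host-side checks for the socket_vmnet daemon.
	ls -l /var/run/socket_vmnet            # does the socket exist at all?
	sudo lsof /var/run/socket_vmnet        # is a daemon actually holding it open?
	# Homebrew has to run the service as root for vmnet access:
	HOMEBREW=$(which brew)
	sudo "${HOMEBREW}" services restart socket_vmnet
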
TestNoKubernetes/serial/StartWithK8s (9.91s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-834000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-834000 --driver=qemu2 : exit status 80 (9.841161167s)

-- stdout --
	* [NoKubernetes-834000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-834000" primary control-plane node in "NoKubernetes-834000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-834000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-834000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-834000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-834000 -n NoKubernetes-834000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-834000 -n NoKubernetes-834000: exit status 7 (64.912583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-834000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.91s)

TestNoKubernetes/serial/StartWithStopK8s (5.31s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-834000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-834000 --no-kubernetes --driver=qemu2 : exit status 80 (5.243141084s)

-- stdout --
	* [NoKubernetes-834000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-834000
	* Restarting existing qemu2 VM for "NoKubernetes-834000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-834000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-834000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-834000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-834000 -n NoKubernetes-834000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-834000 -n NoKubernetes-834000: exit status 7 (63.739ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-834000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.31s)

TestNoKubernetes/serial/Start (5.32s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-834000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-834000 --no-kubernetes --driver=qemu2 : exit status 80 (5.248382125s)

-- stdout --
	* [NoKubernetes-834000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-834000
	* Restarting existing qemu2 VM for "NoKubernetes-834000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-834000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-834000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-834000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-834000 -n NoKubernetes-834000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-834000 -n NoKubernetes-834000: exit status 7 (67.110417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-834000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.32s)

TestNoKubernetes/serial/StartNoArgs (5.26s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-834000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-834000 --driver=qemu2 : exit status 80 (5.22493975s)

-- stdout --
	* [NoKubernetes-834000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-834000
	* Restarting existing qemu2 VM for "NoKubernetes-834000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-834000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-834000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-834000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-834000 -n NoKubernetes-834000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-834000 -n NoKubernetes-834000: exit status 7 (30.50625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-834000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.26s)

TestNetworkPlugins/group/auto/Start (9.84s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-418000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-418000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.841514416s)

-- stdout --
	* [auto-418000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-418000" primary control-plane node in "auto-418000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-418000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0729 04:23:50.964906    4276 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:23:50.965032    4276 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:23:50.965040    4276 out.go:304] Setting ErrFile to fd 2...
	I0729 04:23:50.965042    4276 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:23:50.965181    4276 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:23:50.966220    4276 out.go:298] Setting JSON to false
	I0729 04:23:50.982354    4276 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3193,"bootTime":1722249037,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:23:50.982427    4276 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:23:50.989431    4276 out.go:177] * [auto-418000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:23:50.996340    4276 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:23:50.996377    4276 notify.go:220] Checking for updates...
	I0729 04:23:51.003348    4276 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:23:51.006324    4276 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:23:51.009311    4276 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:23:51.012282    4276 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:23:51.015313    4276 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:23:51.018793    4276 config.go:182] Loaded profile config "multinode-369000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:23:51.018862    4276 config.go:182] Loaded profile config "stopped-upgrade-338000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:23:51.018913    4276 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:23:51.022292    4276 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:23:51.029322    4276 start.go:297] selected driver: qemu2
	I0729 04:23:51.029332    4276 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:23:51.029341    4276 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:23:51.031558    4276 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:23:51.032734    4276 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:23:51.036460    4276 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:23:51.036504    4276 cni.go:84] Creating CNI manager for ""
	I0729 04:23:51.036513    4276 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:23:51.036517    4276 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:23:51.036545    4276 start.go:340] cluster config:
	{Name:auto-418000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:23:51.040138    4276 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:23:51.047338    4276 out.go:177] * Starting "auto-418000" primary control-plane node in "auto-418000" cluster
	I0729 04:23:51.051308    4276 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:23:51.051322    4276 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:23:51.051328    4276 cache.go:56] Caching tarball of preloaded images
	I0729 04:23:51.051386    4276 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:23:51.051391    4276 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:23:51.051436    4276 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/auto-418000/config.json ...
	I0729 04:23:51.051446    4276 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/auto-418000/config.json: {Name:mka8b5112cae4212c4331ad51ba661789446434e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:23:51.051785    4276 start.go:360] acquireMachinesLock for auto-418000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:23:51.051816    4276 start.go:364] duration metric: took 25.417µs to acquireMachinesLock for "auto-418000"
	I0729 04:23:51.051827    4276 start.go:93] Provisioning new machine with config: &{Name:auto-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:23:51.051856    4276 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:23:51.060309    4276 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:23:51.075711    4276 start.go:159] libmachine.API.Create for "auto-418000" (driver="qemu2")
	I0729 04:23:51.075734    4276 client.go:168] LocalClient.Create starting
	I0729 04:23:51.075794    4276 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:23:51.075826    4276 main.go:141] libmachine: Decoding PEM data...
	I0729 04:23:51.075837    4276 main.go:141] libmachine: Parsing certificate...
	I0729 04:23:51.075873    4276 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:23:51.075899    4276 main.go:141] libmachine: Decoding PEM data...
	I0729 04:23:51.075907    4276 main.go:141] libmachine: Parsing certificate...
	I0729 04:23:51.076390    4276 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:23:51.227973    4276 main.go:141] libmachine: Creating SSH key...
	I0729 04:23:51.334787    4276 main.go:141] libmachine: Creating Disk image...
	I0729 04:23:51.334794    4276 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:23:51.335012    4276 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/auto-418000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/auto-418000/disk.qcow2
	I0729 04:23:51.344381    4276 main.go:141] libmachine: STDOUT: 
	I0729 04:23:51.344413    4276 main.go:141] libmachine: STDERR: 
	I0729 04:23:51.344476    4276 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/auto-418000/disk.qcow2 +20000M
	I0729 04:23:51.352344    4276 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:23:51.352362    4276 main.go:141] libmachine: STDERR: 
	I0729 04:23:51.352376    4276 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/auto-418000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/auto-418000/disk.qcow2
	I0729 04:23:51.352380    4276 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:23:51.352388    4276 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:23:51.352429    4276 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/auto-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/auto-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/auto-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:b5:45:7b:02:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/auto-418000/disk.qcow2
	I0729 04:23:51.354038    4276 main.go:141] libmachine: STDOUT: 
	I0729 04:23:51.354054    4276 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:23:51.354073    4276 client.go:171] duration metric: took 278.344ms to LocalClient.Create
	I0729 04:23:53.356381    4276 start.go:128] duration metric: took 2.304529583s to createHost
	I0729 04:23:53.356546    4276 start.go:83] releasing machines lock for "auto-418000", held for 2.304794583s
	W0729 04:23:53.356600    4276 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:23:53.373946    4276 out.go:177] * Deleting "auto-418000" in qemu2 ...
	W0729 04:23:53.398668    4276 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:23:53.398707    4276 start.go:729] Will try again in 5 seconds ...
	I0729 04:23:58.400299    4276 start.go:360] acquireMachinesLock for auto-418000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:23:58.400904    4276 start.go:364] duration metric: took 497.417µs to acquireMachinesLock for "auto-418000"
	I0729 04:23:58.400999    4276 start.go:93] Provisioning new machine with config: &{Name:auto-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:23:58.401294    4276 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:23:58.411021    4276 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:23:58.461079    4276 start.go:159] libmachine.API.Create for "auto-418000" (driver="qemu2")
	I0729 04:23:58.461148    4276 client.go:168] LocalClient.Create starting
	I0729 04:23:58.461263    4276 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:23:58.461330    4276 main.go:141] libmachine: Decoding PEM data...
	I0729 04:23:58.461362    4276 main.go:141] libmachine: Parsing certificate...
	I0729 04:23:58.461426    4276 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:23:58.461480    4276 main.go:141] libmachine: Decoding PEM data...
	I0729 04:23:58.461491    4276 main.go:141] libmachine: Parsing certificate...
	I0729 04:23:58.462157    4276 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:23:58.625622    4276 main.go:141] libmachine: Creating SSH key...
	I0729 04:23:58.715816    4276 main.go:141] libmachine: Creating Disk image...
	I0729 04:23:58.715824    4276 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:23:58.716005    4276 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/auto-418000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/auto-418000/disk.qcow2
	I0729 04:23:58.725339    4276 main.go:141] libmachine: STDOUT: 
	I0729 04:23:58.725356    4276 main.go:141] libmachine: STDERR: 
	I0729 04:23:58.725420    4276 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/auto-418000/disk.qcow2 +20000M
	I0729 04:23:58.733643    4276 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:23:58.733660    4276 main.go:141] libmachine: STDERR: 
	I0729 04:23:58.733670    4276 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/auto-418000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/auto-418000/disk.qcow2
	I0729 04:23:58.733674    4276 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:23:58.733682    4276 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:23:58.733715    4276 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/auto-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/auto-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/auto-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:a0:6c:27:b7:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/auto-418000/disk.qcow2
	I0729 04:23:58.735487    4276 main.go:141] libmachine: STDOUT: 
	I0729 04:23:58.735501    4276 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:23:58.735514    4276 client.go:171] duration metric: took 274.369375ms to LocalClient.Create
	I0729 04:24:00.737670    4276 start.go:128] duration metric: took 2.3364015s to createHost
	I0729 04:24:00.737793    4276 start.go:83] releasing machines lock for "auto-418000", held for 2.336926708s
	W0729 04:24:00.738123    4276 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-418000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-418000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:24:00.749845    4276 out.go:177] 
	W0729 04:24:00.753873    4276 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:24:00.753891    4276 out.go:239] * 
	* 
	W0729 04:24:00.755428    4276 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:24:00.765829    4276 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.84s)

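The STDERR in the log above is emitted by socket_vmnet_client itself (see the "executing: /opt/socket_vmnet/bin/socket_vmnet_client ..." line). The failure can likely be reproduced without minikube, assuming the same Homebrew install paths; "true" below is just a placeholder command:

	# Hypothetical minimal repro: socket_vmnet_client connects to the socket
	# before exec'ing its command, so a dead daemon fails immediately.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# Expected when the daemon is down:
	#   Failed to connect to "/var/run/socket_vmnet": Connection refused
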
TestNetworkPlugins/group/calico/Start (9.83s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-418000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-418000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.82979275s)

-- stdout --
	* [calico-418000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-418000" primary control-plane node in "calico-418000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-418000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0729 04:24:02.898645    4388 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:24:02.898776    4388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:24:02.898779    4388 out.go:304] Setting ErrFile to fd 2...
	I0729 04:24:02.898782    4388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:24:02.898942    4388 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:24:02.900043    4388 out.go:298] Setting JSON to false
	I0729 04:24:02.917309    4388 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3205,"bootTime":1722249037,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:24:02.917397    4388 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:24:02.923333    4388 out.go:177] * [calico-418000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:24:02.930343    4388 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:24:02.930413    4388 notify.go:220] Checking for updates...
	I0729 04:24:02.937263    4388 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:24:02.940319    4388 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:24:02.943250    4388 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:24:02.946325    4388 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:24:02.949318    4388 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:24:02.952494    4388 config.go:182] Loaded profile config "multinode-369000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:24:02.952569    4388 config.go:182] Loaded profile config "stopped-upgrade-338000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:24:02.952620    4388 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:24:02.956239    4388 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:24:02.962242    4388 start.go:297] selected driver: qemu2
	I0729 04:24:02.962250    4388 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:24:02.962257    4388 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:24:02.964610    4388 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:24:02.967271    4388 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:24:02.970342    4388 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:24:02.970376    4388 cni.go:84] Creating CNI manager for "calico"
	I0729 04:24:02.970383    4388 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0729 04:24:02.970417    4388 start.go:340] cluster config:
	{Name:calico-418000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:24:02.974100    4388 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:24:02.981293    4388 out.go:177] * Starting "calico-418000" primary control-plane node in "calico-418000" cluster
	I0729 04:24:02.985339    4388 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:24:02.985360    4388 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:24:02.985371    4388 cache.go:56] Caching tarball of preloaded images
	I0729 04:24:02.985430    4388 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:24:02.985436    4388 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:24:02.985502    4388 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/calico-418000/config.json ...
	I0729 04:24:02.985516    4388 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/calico-418000/config.json: {Name:mk04a7dfa8596ac655be8dacc062ca7caa370e5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:24:02.985743    4388 start.go:360] acquireMachinesLock for calico-418000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:24:02.985777    4388 start.go:364] duration metric: took 27.958µs to acquireMachinesLock for "calico-418000"
	I0729 04:24:02.985789    4388 start.go:93] Provisioning new machine with config: &{Name:calico-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:24:02.985814    4388 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:24:02.994306    4388 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:24:03.012095    4388 start.go:159] libmachine.API.Create for "calico-418000" (driver="qemu2")
	I0729 04:24:03.012124    4388 client.go:168] LocalClient.Create starting
	I0729 04:24:03.012195    4388 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:24:03.012230    4388 main.go:141] libmachine: Decoding PEM data...
	I0729 04:24:03.012238    4388 main.go:141] libmachine: Parsing certificate...
	I0729 04:24:03.012281    4388 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:24:03.012304    4388 main.go:141] libmachine: Decoding PEM data...
	I0729 04:24:03.012312    4388 main.go:141] libmachine: Parsing certificate...
	I0729 04:24:03.012693    4388 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:24:03.163524    4388 main.go:141] libmachine: Creating SSH key...
	I0729 04:24:03.249118    4388 main.go:141] libmachine: Creating Disk image...
	I0729 04:24:03.249125    4388 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:24:03.249317    4388 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/calico-418000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/calico-418000/disk.qcow2
	I0729 04:24:03.258822    4388 main.go:141] libmachine: STDOUT: 
	I0729 04:24:03.258849    4388 main.go:141] libmachine: STDERR: 
	I0729 04:24:03.258903    4388 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/calico-418000/disk.qcow2 +20000M
	I0729 04:24:03.267253    4388 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:24:03.267268    4388 main.go:141] libmachine: STDERR: 
	I0729 04:24:03.267289    4388 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/calico-418000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/calico-418000/disk.qcow2
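Note: the disk image above is built in two qemu-img steps, a raw scratch file converted to qcow2 and then grown by the requested 20000 MB. A minimal standalone sketch of the same sequence (file names shortened here for illustration):

	# convert the raw scratch image to qcow2, then grow it
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M

Both steps succeed in this run; the failure comes later, at VM start.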
	I0729 04:24:03.267294    4388 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:24:03.267306    4388 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:24:03.267332    4388 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/calico-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/calico-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/calico-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:a8:89:a0:e4:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/calico-418000/disk.qcow2
	I0729 04:24:03.268991    4388 main.go:141] libmachine: STDOUT: 
	I0729 04:24:03.269006    4388 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:24:03.269026    4388 client.go:171] duration metric: took 256.905167ms to LocalClient.Create
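Note: the "Connection refused" above is the root cause of this failure. Nothing is accepting connections on the unix socket /var/run/socket_vmnet, which means the socket_vmnet daemon is not running (or is listening on a different path). A quick hedged check, assuming the install paths shown in the log:

	ls -l /var/run/socket_vmnet                  # does the socket file exist?
	sudo launchctl list | grep -i socket_vmnet   # is a daemon loaded? (the launchd label depends on how it was installed)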
	I0729 04:24:05.270841    4388 start.go:128] duration metric: took 2.28507875s to createHost
	I0729 04:24:05.270905    4388 start.go:83] releasing machines lock for "calico-418000", held for 2.285194042s
	W0729 04:24:05.270976    4388 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:24:05.281740    4388 out.go:177] * Deleting "calico-418000" in qemu2 ...
	W0729 04:24:05.306362    4388 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:24:05.306388    4388 start.go:729] Will try again in 5 seconds ...
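Note: minikube makes exactly one retry, five seconds later; since the daemon is still down, the second attempt below fails identically. The failing step can also be reproduced without minikube: socket_vmnet_client first connects to the socket and only then execs the given command (qemu receives the connection as fd 3, hence -netdev socket,id=net0,fd=3 above), so even a trivial command should surface the same error. A sketch, assuming the client behaves as the socket_vmnet README describes:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# expected while the daemon is down: Failed to connect to "/var/run/socket_vmnet": Connection refused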
	I0729 04:24:10.308593    4388 start.go:360] acquireMachinesLock for calico-418000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:24:10.309194    4388 start.go:364] duration metric: took 468.125µs to acquireMachinesLock for "calico-418000"
	I0729 04:24:10.309339    4388 start.go:93] Provisioning new machine with config: &{Name:calico-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:24:10.309598    4388 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:24:10.320137    4388 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:24:10.362025    4388 start.go:159] libmachine.API.Create for "calico-418000" (driver="qemu2")
	I0729 04:24:10.362083    4388 client.go:168] LocalClient.Create starting
	I0729 04:24:10.362184    4388 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:24:10.362251    4388 main.go:141] libmachine: Decoding PEM data...
	I0729 04:24:10.362266    4388 main.go:141] libmachine: Parsing certificate...
	I0729 04:24:10.362314    4388 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:24:10.362352    4388 main.go:141] libmachine: Decoding PEM data...
	I0729 04:24:10.362365    4388 main.go:141] libmachine: Parsing certificate...
	I0729 04:24:10.362929    4388 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:24:10.537826    4388 main.go:141] libmachine: Creating SSH key...
	I0729 04:24:10.650611    4388 main.go:141] libmachine: Creating Disk image...
	I0729 04:24:10.650617    4388 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:24:10.650790    4388 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/calico-418000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/calico-418000/disk.qcow2
	I0729 04:24:10.659919    4388 main.go:141] libmachine: STDOUT: 
	I0729 04:24:10.659939    4388 main.go:141] libmachine: STDERR: 
	I0729 04:24:10.659986    4388 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/calico-418000/disk.qcow2 +20000M
	I0729 04:24:10.667996    4388 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:24:10.668010    4388 main.go:141] libmachine: STDERR: 
	I0729 04:24:10.668021    4388 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/calico-418000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/calico-418000/disk.qcow2
	I0729 04:24:10.668026    4388 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:24:10.668036    4388 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:24:10.668076    4388 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/calico-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/calico-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/calico-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:22:aa:e0:18:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/calico-418000/disk.qcow2
	I0729 04:24:10.669771    4388 main.go:141] libmachine: STDOUT: 
	I0729 04:24:10.669788    4388 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:24:10.669801    4388 client.go:171] duration metric: took 307.721333ms to LocalClient.Create
	I0729 04:24:12.670189    4388 start.go:128] duration metric: took 2.360653375s to createHost
	I0729 04:24:12.670201    4388 start.go:83] releasing machines lock for "calico-418000", held for 2.361057083s
	W0729 04:24:12.670282    4388 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-418000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-418000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:24:12.677529    4388 out.go:177] 
	W0729 04:24:12.681368    4388 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:24:12.681382    4388 out.go:239] * 
	* 
	W0729 04:24:12.682056    4388 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:24:12.691493    4388 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.83s)
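Every start failure in this group shares that single root cause. On a local setup the usual fix would be to start the daemon before running minikube, along the lines of the socket_vmnet README (the gateway address is the documented default; flags and paths may differ by version and install method):

	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet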
TestNetworkPlugins/group/custom-flannel/Start (9.94s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-418000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-418000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.935261917s)

-- stdout --
	* [custom-flannel-418000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-418000" primary control-plane node in "custom-flannel-418000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-418000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	

-- /stdout --
** stderr ** 
	I0729 04:24:14.992980    4505 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:24:14.993092    4505 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:24:14.993095    4505 out.go:304] Setting ErrFile to fd 2...
	I0729 04:24:14.993098    4505 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:24:14.993244    4505 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:24:14.994385    4505 out.go:298] Setting JSON to false
	I0729 04:24:15.011449    4505 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3218,"bootTime":1722249037,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:24:15.011518    4505 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:24:15.016956    4505 out.go:177] * [custom-flannel-418000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:24:15.023836    4505 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:24:15.023896    4505 notify.go:220] Checking for updates...
	I0729 04:24:15.030833    4505 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:24:15.033832    4505 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:24:15.036830    4505 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:24:15.039897    4505 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:24:15.042801    4505 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:24:15.046130    4505 config.go:182] Loaded profile config "multinode-369000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:24:15.046195    4505 config.go:182] Loaded profile config "stopped-upgrade-338000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:24:15.046251    4505 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:24:15.050820    4505 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:24:15.057817    4505 start.go:297] selected driver: qemu2
	I0729 04:24:15.057823    4505 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:24:15.057829    4505 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:24:15.060310    4505 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:24:15.062778    4505 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:24:15.064043    4505 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:24:15.064061    4505 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0729 04:24:15.064068    4505 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
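Note: unlike the calico run, --cni here points at a manifest file rather than a built-in name, so minikube only sets NetworkPlugin=cni and would apply the manifest itself once the node was up. Roughly the equivalent manual step (a sketch only; it is never reached in this run because the VM never boots):

	kubectl --context custom-flannel-418000 apply -f testdata/kube-flannel.yaml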
	I0729 04:24:15.064095    4505 start.go:340] cluster config:
	{Name:custom-flannel-418000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:24:15.067513    4505 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:24:15.074871    4505 out.go:177] * Starting "custom-flannel-418000" primary control-plane node in "custom-flannel-418000" cluster
	I0729 04:24:15.078799    4505 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:24:15.078813    4505 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:24:15.078821    4505 cache.go:56] Caching tarball of preloaded images
	I0729 04:24:15.078877    4505 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:24:15.078882    4505 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:24:15.078928    4505 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/custom-flannel-418000/config.json ...
	I0729 04:24:15.078939    4505 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/custom-flannel-418000/config.json: {Name:mk748993b58f9c836199088d7715d560832dfaeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:24:15.079137    4505 start.go:360] acquireMachinesLock for custom-flannel-418000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:24:15.079167    4505 start.go:364] duration metric: took 24.541µs to acquireMachinesLock for "custom-flannel-418000"
	I0729 04:24:15.079178    4505 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:24:15.079205    4505 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:24:15.087761    4505 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:24:15.102760    4505 start.go:159] libmachine.API.Create for "custom-flannel-418000" (driver="qemu2")
	I0729 04:24:15.102791    4505 client.go:168] LocalClient.Create starting
	I0729 04:24:15.102848    4505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:24:15.102880    4505 main.go:141] libmachine: Decoding PEM data...
	I0729 04:24:15.102888    4505 main.go:141] libmachine: Parsing certificate...
	I0729 04:24:15.102935    4505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:24:15.102957    4505 main.go:141] libmachine: Decoding PEM data...
	I0729 04:24:15.102964    4505 main.go:141] libmachine: Parsing certificate...
	I0729 04:24:15.103309    4505 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:24:15.253696    4505 main.go:141] libmachine: Creating SSH key...
	I0729 04:24:15.429732    4505 main.go:141] libmachine: Creating Disk image...
	I0729 04:24:15.429739    4505 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:24:15.429963    4505 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/custom-flannel-418000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/custom-flannel-418000/disk.qcow2
	I0729 04:24:15.439652    4505 main.go:141] libmachine: STDOUT: 
	I0729 04:24:15.439672    4505 main.go:141] libmachine: STDERR: 
	I0729 04:24:15.439724    4505 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/custom-flannel-418000/disk.qcow2 +20000M
	I0729 04:24:15.447804    4505 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:24:15.447824    4505 main.go:141] libmachine: STDERR: 
	I0729 04:24:15.447839    4505 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/custom-flannel-418000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/custom-flannel-418000/disk.qcow2
	I0729 04:24:15.447842    4505 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:24:15.447857    4505 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:24:15.447885    4505 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/custom-flannel-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/custom-flannel-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/custom-flannel-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:36:19:d1:a3:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/custom-flannel-418000/disk.qcow2
	I0729 04:24:15.449536    4505 main.go:141] libmachine: STDOUT: 
	I0729 04:24:15.449553    4505 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:24:15.449578    4505 client.go:171] duration metric: took 346.793833ms to LocalClient.Create
	I0729 04:24:17.451786    4505 start.go:128] duration metric: took 2.372629s to createHost
	I0729 04:24:17.451888    4505 start.go:83] releasing machines lock for "custom-flannel-418000", held for 2.372788s
	W0729 04:24:17.451986    4505 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:24:17.471204    4505 out.go:177] * Deleting "custom-flannel-418000" in qemu2 ...
	W0729 04:24:17.498359    4505 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:24:17.498391    4505 start.go:729] Will try again in 5 seconds ...
	I0729 04:24:22.500488    4505 start.go:360] acquireMachinesLock for custom-flannel-418000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:24:22.501162    4505 start.go:364] duration metric: took 535.666µs to acquireMachinesLock for "custom-flannel-418000"
	I0729 04:24:22.501240    4505 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:24:22.501548    4505 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:24:22.511006    4505 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:24:22.563366    4505 start.go:159] libmachine.API.Create for "custom-flannel-418000" (driver="qemu2")
	I0729 04:24:22.563420    4505 client.go:168] LocalClient.Create starting
	I0729 04:24:22.563552    4505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:24:22.563630    4505 main.go:141] libmachine: Decoding PEM data...
	I0729 04:24:22.563648    4505 main.go:141] libmachine: Parsing certificate...
	I0729 04:24:22.563706    4505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:24:22.563753    4505 main.go:141] libmachine: Decoding PEM data...
	I0729 04:24:22.563766    4505 main.go:141] libmachine: Parsing certificate...
	I0729 04:24:22.564326    4505 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:24:22.724603    4505 main.go:141] libmachine: Creating SSH key...
	I0729 04:24:22.838266    4505 main.go:141] libmachine: Creating Disk image...
	I0729 04:24:22.838278    4505 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:24:22.838454    4505 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/custom-flannel-418000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/custom-flannel-418000/disk.qcow2
	I0729 04:24:22.847713    4505 main.go:141] libmachine: STDOUT: 
	I0729 04:24:22.847740    4505 main.go:141] libmachine: STDERR: 
	I0729 04:24:22.847796    4505 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/custom-flannel-418000/disk.qcow2 +20000M
	I0729 04:24:22.855699    4505 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:24:22.855712    4505 main.go:141] libmachine: STDERR: 
	I0729 04:24:22.855728    4505 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/custom-flannel-418000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/custom-flannel-418000/disk.qcow2
	I0729 04:24:22.855736    4505 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:24:22.855746    4505 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:24:22.855782    4505 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/custom-flannel-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/custom-flannel-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/custom-flannel-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:63:c4:15:d1:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/custom-flannel-418000/disk.qcow2
	I0729 04:24:22.857433    4505 main.go:141] libmachine: STDOUT: 
	I0729 04:24:22.857446    4505 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:24:22.857460    4505 client.go:171] duration metric: took 294.044292ms to LocalClient.Create
	I0729 04:24:24.859598    4505 start.go:128] duration metric: took 2.358093542s to createHost
	I0729 04:24:24.859666    4505 start.go:83] releasing machines lock for "custom-flannel-418000", held for 2.358553208s
	W0729 04:24:24.860047    4505 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-418000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-418000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:24:24.872621    4505 out.go:177] 
	W0729 04:24:24.876658    4505 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:24:24.876681    4505 out.go:239] * 
	* 
	W0729 04:24:24.879364    4505 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:24:24.887601    4505 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.94s)

TestNetworkPlugins/group/false/Start (10s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-418000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-418000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.998247167s)

-- stdout --
	* [false-418000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-418000" primary control-plane node in "false-418000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-418000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	

-- /stdout --
** stderr ** 
	I0729 04:24:27.251937    4622 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:24:27.252067    4622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:24:27.252070    4622 out.go:304] Setting ErrFile to fd 2...
	I0729 04:24:27.252073    4622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:24:27.252221    4622 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:24:27.253419    4622 out.go:298] Setting JSON to false
	I0729 04:24:27.270022    4622 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3230,"bootTime":1722249037,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:24:27.270084    4622 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:24:27.276960    4622 out.go:177] * [false-418000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:24:27.283870    4622 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:24:27.283959    4622 notify.go:220] Checking for updates...
	I0729 04:24:27.289231    4622 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:24:27.291859    4622 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:24:27.294864    4622 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:24:27.297899    4622 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:24:27.300856    4622 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:24:27.304254    4622 config.go:182] Loaded profile config "multinode-369000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:24:27.304329    4622 config.go:182] Loaded profile config "stopped-upgrade-338000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:24:27.304384    4622 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:24:27.307833    4622 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:24:27.314827    4622 start.go:297] selected driver: qemu2
	I0729 04:24:27.314839    4622 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:24:27.314846    4622 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:24:27.317178    4622 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:24:27.319869    4622 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:24:27.325105    4622 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:24:27.325144    4622 cni.go:84] Creating CNI manager for "false"
	I0729 04:24:27.325182    4622 start.go:340] cluster config:
	{Name:false-418000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
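Note the contrast with the previous profiles: with --cni=false the dumped config carries an empty NetworkPlugin and CNI:false, so no CNI manifest would be applied at all. One way to compare what each profile recorded, assuming the Go field names above are marshalled into config.json unchanged:

	grep -R '"CNI"' /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/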
	I0729 04:24:27.328822    4622 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:24:27.335712    4622 out.go:177] * Starting "false-418000" primary control-plane node in "false-418000" cluster
	I0729 04:24:27.339855    4622 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:24:27.339868    4622 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:24:27.339881    4622 cache.go:56] Caching tarball of preloaded images
	I0729 04:24:27.339936    4622 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:24:27.339941    4622 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:24:27.339995    4622 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/false-418000/config.json ...
	I0729 04:24:27.340005    4622 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/false-418000/config.json: {Name:mk9a29dcda9e2b1adcaa0ccbac0b89415aa3e328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:24:27.340343    4622 start.go:360] acquireMachinesLock for false-418000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:24:27.340380    4622 start.go:364] duration metric: took 29.5µs to acquireMachinesLock for "false-418000"
	I0729 04:24:27.340392    4622 start.go:93] Provisioning new machine with config: &{Name:false-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:24:27.340418    4622 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:24:27.343858    4622 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:24:27.360601    4622 start.go:159] libmachine.API.Create for "false-418000" (driver="qemu2")
	I0729 04:24:27.360621    4622 client.go:168] LocalClient.Create starting
	I0729 04:24:27.360678    4622 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:24:27.360707    4622 main.go:141] libmachine: Decoding PEM data...
	I0729 04:24:27.360716    4622 main.go:141] libmachine: Parsing certificate...
	I0729 04:24:27.360756    4622 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:24:27.360784    4622 main.go:141] libmachine: Decoding PEM data...
	I0729 04:24:27.360791    4622 main.go:141] libmachine: Parsing certificate...
	I0729 04:24:27.361164    4622 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:24:27.513370    4622 main.go:141] libmachine: Creating SSH key...
	I0729 04:24:27.696551    4622 main.go:141] libmachine: Creating Disk image...
	I0729 04:24:27.696564    4622 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:24:27.696766    4622 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/false-418000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/false-418000/disk.qcow2
	I0729 04:24:27.706403    4622 main.go:141] libmachine: STDOUT: 
	I0729 04:24:27.706422    4622 main.go:141] libmachine: STDERR: 
	I0729 04:24:27.706484    4622 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/false-418000/disk.qcow2 +20000M
	I0729 04:24:27.714657    4622 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:24:27.714669    4622 main.go:141] libmachine: STDERR: 
	I0729 04:24:27.714681    4622 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/false-418000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/false-418000/disk.qcow2
	I0729 04:24:27.714687    4622 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:24:27.714700    4622 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:24:27.714730    4622 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/false-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/false-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/false-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:5a:1c:d6:e3:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/false-418000/disk.qcow2
	I0729 04:24:27.716379    4622 main.go:141] libmachine: STDOUT: 
	I0729 04:24:27.716394    4622 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:24:27.716413    4622 client.go:171] duration metric: took 355.800084ms to LocalClient.Create
	I0729 04:24:29.718563    4622 start.go:128] duration metric: took 2.378195083s to createHost
	I0729 04:24:29.718646    4622 start.go:83] releasing machines lock for "false-418000", held for 2.378332791s
	W0729 04:24:29.718766    4622 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:24:29.730100    4622 out.go:177] * Deleting "false-418000" in qemu2 ...
	W0729 04:24:29.757837    4622 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:24:29.757872    4622 start.go:729] Will try again in 5 seconds ...
	I0729 04:24:34.759907    4622 start.go:360] acquireMachinesLock for false-418000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:24:34.760455    4622 start.go:364] duration metric: took 391.209µs to acquireMachinesLock for "false-418000"
	I0729 04:24:34.760565    4622 start.go:93] Provisioning new machine with config: &{Name:false-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:24:34.760776    4622 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:24:34.770303    4622 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:24:34.813691    4622 start.go:159] libmachine.API.Create for "false-418000" (driver="qemu2")
	I0729 04:24:34.813746    4622 client.go:168] LocalClient.Create starting
	I0729 04:24:34.813851    4622 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:24:34.813917    4622 main.go:141] libmachine: Decoding PEM data...
	I0729 04:24:34.813933    4622 main.go:141] libmachine: Parsing certificate...
	I0729 04:24:34.813991    4622 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:24:34.814037    4622 main.go:141] libmachine: Decoding PEM data...
	I0729 04:24:34.814049    4622 main.go:141] libmachine: Parsing certificate...
	I0729 04:24:34.814727    4622 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:24:34.975516    4622 main.go:141] libmachine: Creating SSH key...
	I0729 04:24:35.157167    4622 main.go:141] libmachine: Creating Disk image...
	I0729 04:24:35.157180    4622 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:24:35.157358    4622 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/false-418000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/false-418000/disk.qcow2
	I0729 04:24:35.167042    4622 main.go:141] libmachine: STDOUT: 
	I0729 04:24:35.167059    4622 main.go:141] libmachine: STDERR: 
	I0729 04:24:35.167111    4622 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/false-418000/disk.qcow2 +20000M
	I0729 04:24:35.175034    4622 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:24:35.175047    4622 main.go:141] libmachine: STDERR: 
	I0729 04:24:35.175059    4622 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/false-418000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/false-418000/disk.qcow2
	I0729 04:24:35.175063    4622 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:24:35.175074    4622 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:24:35.175116    4622 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/false-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/false-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/false-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:73:72:90:b1:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/false-418000/disk.qcow2
	I0729 04:24:35.176874    4622 main.go:141] libmachine: STDOUT: 
	I0729 04:24:35.176888    4622 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:24:35.176902    4622 client.go:171] duration metric: took 363.159083ms to LocalClient.Create
	I0729 04:24:37.179147    4622 start.go:128] duration metric: took 2.418401292s to createHost
	I0729 04:24:37.179230    4622 start.go:83] releasing machines lock for "false-418000", held for 2.418832292s
	W0729 04:24:37.179612    4622 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-418000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-418000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:24:37.193453    4622 out.go:177] 
	W0729 04:24:37.196512    4622 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:24:37.196543    4622 out.go:239] * 
	* 
	W0729 04:24:37.199060    4622 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:24:37.207354    4622 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (10.00s)
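Note: every attempt in this group dies at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never started with a usable network backend. As a minimal standalone sketch (not part of the test suite; the socket path is the SocketVMnetPath value from the config dumps above), the following Go program reproduces just that connectivity check:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// The same unix socket the failing runs try to use.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "Connection refused" here means no daemon is listening,
			// matching the STDERR lines captured in the logs above.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening on", sock)
	}

If this check fails on the CI host, restarting the socket_vmnet service is the likely fix; minikube's own retry ("Will try again in 5 seconds ...") cannot succeed while the daemon is down.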

TestNetworkPlugins/group/kindnet/Start (9.76s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-418000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-418000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.76087725s)

-- stdout --
	* [kindnet-418000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-418000" primary control-plane node in "kindnet-418000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-418000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:24:39.400300    4731 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:24:39.400429    4731 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:24:39.400432    4731 out.go:304] Setting ErrFile to fd 2...
	I0729 04:24:39.400435    4731 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:24:39.400591    4731 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:24:39.401650    4731 out.go:298] Setting JSON to false
	I0729 04:24:39.417975    4731 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3242,"bootTime":1722249037,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:24:39.418075    4731 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:24:39.422502    4731 out.go:177] * [kindnet-418000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:24:39.430441    4731 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:24:39.430567    4731 notify.go:220] Checking for updates...
	I0729 04:24:39.435628    4731 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:24:39.438349    4731 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:24:39.441452    4731 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:24:39.444442    4731 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:24:39.447432    4731 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:24:39.450788    4731 config.go:182] Loaded profile config "multinode-369000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:24:39.450849    4731 config.go:182] Loaded profile config "stopped-upgrade-338000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:24:39.450887    4731 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:24:39.455427    4731 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:24:39.462620    4731 start.go:297] selected driver: qemu2
	I0729 04:24:39.462629    4731 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:24:39.462638    4731 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:24:39.464786    4731 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:24:39.467433    4731 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:24:39.470466    4731 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:24:39.470508    4731 cni.go:84] Creating CNI manager for "kindnet"
	I0729 04:24:39.470512    4731 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 04:24:39.470547    4731 start.go:340] cluster config:
	{Name:kindnet-418000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:24:39.474071    4731 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:24:39.481246    4731 out.go:177] * Starting "kindnet-418000" primary control-plane node in "kindnet-418000" cluster
	I0729 04:24:39.485433    4731 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:24:39.485448    4731 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:24:39.485457    4731 cache.go:56] Caching tarball of preloaded images
	I0729 04:24:39.485522    4731 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:24:39.485528    4731 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:24:39.485592    4731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/kindnet-418000/config.json ...
	I0729 04:24:39.485604    4731 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/kindnet-418000/config.json: {Name:mk1da7d07be1cbe0485b386aced5ce85bd559ab7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:24:39.485945    4731 start.go:360] acquireMachinesLock for kindnet-418000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:24:39.485977    4731 start.go:364] duration metric: took 26.542µs to acquireMachinesLock for "kindnet-418000"
	I0729 04:24:39.485987    4731 start.go:93] Provisioning new machine with config: &{Name:kindnet-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:24:39.486018    4731 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:24:39.493417    4731 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:24:39.509808    4731 start.go:159] libmachine.API.Create for "kindnet-418000" (driver="qemu2")
	I0729 04:24:39.509831    4731 client.go:168] LocalClient.Create starting
	I0729 04:24:39.509898    4731 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:24:39.509928    4731 main.go:141] libmachine: Decoding PEM data...
	I0729 04:24:39.509936    4731 main.go:141] libmachine: Parsing certificate...
	I0729 04:24:39.509977    4731 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:24:39.510001    4731 main.go:141] libmachine: Decoding PEM data...
	I0729 04:24:39.510010    4731 main.go:141] libmachine: Parsing certificate...
	I0729 04:24:39.510465    4731 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:24:39.660734    4731 main.go:141] libmachine: Creating SSH key...
	I0729 04:24:39.733599    4731 main.go:141] libmachine: Creating Disk image...
	I0729 04:24:39.733604    4731 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:24:39.733797    4731 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kindnet-418000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kindnet-418000/disk.qcow2
	I0729 04:24:39.742817    4731 main.go:141] libmachine: STDOUT: 
	I0729 04:24:39.742836    4731 main.go:141] libmachine: STDERR: 
	I0729 04:24:39.742888    4731 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kindnet-418000/disk.qcow2 +20000M
	I0729 04:24:39.751037    4731 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:24:39.751053    4731 main.go:141] libmachine: STDERR: 
	I0729 04:24:39.751066    4731 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kindnet-418000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kindnet-418000/disk.qcow2
	I0729 04:24:39.751074    4731 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:24:39.751086    4731 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:24:39.751116    4731 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kindnet-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/kindnet-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kindnet-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:63:31:27:30:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kindnet-418000/disk.qcow2
	I0729 04:24:39.752869    4731 main.go:141] libmachine: STDOUT: 
	I0729 04:24:39.752884    4731 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:24:39.752901    4731 client.go:171] duration metric: took 243.073458ms to LocalClient.Create
	I0729 04:24:41.755013    4731 start.go:128] duration metric: took 2.269044667s to createHost
	I0729 04:24:41.755076    4731 start.go:83] releasing machines lock for "kindnet-418000", held for 2.269166792s
	W0729 04:24:41.755174    4731 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:24:41.765753    4731 out.go:177] * Deleting "kindnet-418000" in qemu2 ...
	W0729 04:24:41.788210    4731 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:24:41.788240    4731 start.go:729] Will try again in 5 seconds ...
	I0729 04:24:46.790297    4731 start.go:360] acquireMachinesLock for kindnet-418000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:24:46.790807    4731 start.go:364] duration metric: took 381.959µs to acquireMachinesLock for "kindnet-418000"
	I0729 04:24:46.790875    4731 start.go:93] Provisioning new machine with config: &{Name:kindnet-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:24:46.791116    4731 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:24:46.799628    4731 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:24:46.844972    4731 start.go:159] libmachine.API.Create for "kindnet-418000" (driver="qemu2")
	I0729 04:24:46.845033    4731 client.go:168] LocalClient.Create starting
	I0729 04:24:46.845170    4731 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:24:46.845238    4731 main.go:141] libmachine: Decoding PEM data...
	I0729 04:24:46.845254    4731 main.go:141] libmachine: Parsing certificate...
	I0729 04:24:46.845317    4731 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:24:46.845368    4731 main.go:141] libmachine: Decoding PEM data...
	I0729 04:24:46.845386    4731 main.go:141] libmachine: Parsing certificate...
	I0729 04:24:46.845873    4731 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:24:47.004637    4731 main.go:141] libmachine: Creating SSH key...
	I0729 04:24:47.071346    4731 main.go:141] libmachine: Creating Disk image...
	I0729 04:24:47.071351    4731 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:24:47.071542    4731 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kindnet-418000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kindnet-418000/disk.qcow2
	I0729 04:24:47.081164    4731 main.go:141] libmachine: STDOUT: 
	I0729 04:24:47.081184    4731 main.go:141] libmachine: STDERR: 
	I0729 04:24:47.081246    4731 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kindnet-418000/disk.qcow2 +20000M
	I0729 04:24:47.089209    4731 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:24:47.089224    4731 main.go:141] libmachine: STDERR: 
	I0729 04:24:47.089236    4731 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kindnet-418000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kindnet-418000/disk.qcow2
	I0729 04:24:47.089242    4731 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:24:47.089252    4731 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:24:47.089287    4731 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kindnet-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/kindnet-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kindnet-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:3d:6c:e4:99:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kindnet-418000/disk.qcow2
	I0729 04:24:47.091058    4731 main.go:141] libmachine: STDOUT: 
	I0729 04:24:47.091072    4731 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:24:47.091084    4731 client.go:171] duration metric: took 246.053125ms to LocalClient.Create
	I0729 04:24:49.093214    4731 start.go:128] duration metric: took 2.302114833s to createHost
	I0729 04:24:49.093281    4731 start.go:83] releasing machines lock for "kindnet-418000", held for 2.302521625s
	W0729 04:24:49.093721    4731 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-418000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-418000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:24:49.103257    4731 out.go:177] 
	W0729 04:24:49.109345    4731 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:24:49.109464    4731 out.go:239] * 
	* 
	W0729 04:24:49.112155    4731 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:24:49.120220    4731 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.76s)
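Note: the QEMU command lines above pass "-netdev socket,id=net0,fd=3", which is why the daemon must be reachable before QEMU launches: socket_vmnet_client connects to /var/run/socket_vmnet first and then starts qemu-system-aarch64 with the connected socket inherited as file descriptor 3. Below is a hypothetical Go sketch of that fd-passing pattern (the real client is a separate C program; the QEMU flags are abbreviated from the logs):

	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		// Step 1: connect to the daemon; this is the step that fails
		// with "Connection refused" throughout this report.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatalf("dial socket_vmnet: %v", err)
		}
		f, err := conn.(*net.UnixConn).File() // dup the socket as an *os.File
		if err != nil {
			log.Fatal(err)
		}
		// Step 2: exec QEMU with the socket as fd 3. ExtraFiles[0] becomes
		// descriptor 3 in the child, after stdin, stdout, and stderr.
		cmd := exec.Command("qemu-system-aarch64",
			"-netdev", "socket,id=net0,fd=3",
			"-device", "virtio-net-pci,netdev=net0")
		cmd.ExtraFiles = []*os.File{f}
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}

Because the dial happens before the exec, a refused connection aborts VM creation immediately; that is consistent with the roughly 10-second failures recorded here (two create attempts of about 2.3s each plus the 5-second retry wait).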

TestNetworkPlugins/group/flannel/Start (9.86s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-418000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-418000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.860943875s)

-- stdout --
	* [flannel-418000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-418000" primary control-plane node in "flannel-418000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-418000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:24:51.415607    4849 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:24:51.415725    4849 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:24:51.415730    4849 out.go:304] Setting ErrFile to fd 2...
	I0729 04:24:51.415732    4849 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:24:51.415888    4849 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:24:51.417005    4849 out.go:298] Setting JSON to false
	I0729 04:24:51.433184    4849 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3254,"bootTime":1722249037,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:24:51.433254    4849 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:24:51.440262    4849 out.go:177] * [flannel-418000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:24:51.447085    4849 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:24:51.447152    4849 notify.go:220] Checking for updates...
	I0729 04:24:51.454238    4849 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:24:51.455567    4849 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:24:51.458178    4849 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:24:51.461212    4849 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:24:51.464246    4849 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:24:51.467573    4849 config.go:182] Loaded profile config "multinode-369000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:24:51.467638    4849 config.go:182] Loaded profile config "stopped-upgrade-338000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:24:51.467675    4849 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:24:51.472138    4849 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:24:51.479197    4849 start.go:297] selected driver: qemu2
	I0729 04:24:51.479204    4849 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:24:51.479210    4849 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:24:51.481512    4849 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:24:51.484143    4849 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:24:51.487315    4849 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:24:51.487334    4849 cni.go:84] Creating CNI manager for "flannel"
	I0729 04:24:51.487338    4849 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0729 04:24:51.487372    4849 start.go:340] cluster config:
	{Name:flannel-418000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:24:51.490896    4849 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:24:51.498210    4849 out.go:177] * Starting "flannel-418000" primary control-plane node in "flannel-418000" cluster
	I0729 04:24:51.501141    4849 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:24:51.501158    4849 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:24:51.501173    4849 cache.go:56] Caching tarball of preloaded images
	I0729 04:24:51.501245    4849 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:24:51.501250    4849 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:24:51.501314    4849 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/flannel-418000/config.json ...
	I0729 04:24:51.501326    4849 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/flannel-418000/config.json: {Name:mk032b03b492b06f09b119b5f130bb719cf8aa31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:24:51.501671    4849 start.go:360] acquireMachinesLock for flannel-418000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:24:51.501704    4849 start.go:364] duration metric: took 27.958µs to acquireMachinesLock for "flannel-418000"
	I0729 04:24:51.501715    4849 start.go:93] Provisioning new machine with config: &{Name:flannel-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:24:51.501741    4849 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:24:51.509036    4849 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:24:51.525836    4849 start.go:159] libmachine.API.Create for "flannel-418000" (driver="qemu2")
	I0729 04:24:51.525862    4849 client.go:168] LocalClient.Create starting
	I0729 04:24:51.525925    4849 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:24:51.525955    4849 main.go:141] libmachine: Decoding PEM data...
	I0729 04:24:51.525965    4849 main.go:141] libmachine: Parsing certificate...
	I0729 04:24:51.526007    4849 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:24:51.526030    4849 main.go:141] libmachine: Decoding PEM data...
	I0729 04:24:51.526038    4849 main.go:141] libmachine: Parsing certificate...
	I0729 04:24:51.526451    4849 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:24:51.678676    4849 main.go:141] libmachine: Creating SSH key...
	I0729 04:24:51.763744    4849 main.go:141] libmachine: Creating Disk image...
	I0729 04:24:51.763757    4849 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:24:51.763978    4849 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/flannel-418000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/flannel-418000/disk.qcow2
	I0729 04:24:51.773923    4849 main.go:141] libmachine: STDOUT: 
	I0729 04:24:51.773947    4849 main.go:141] libmachine: STDERR: 
	I0729 04:24:51.774010    4849 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/flannel-418000/disk.qcow2 +20000M
	I0729 04:24:51.783235    4849 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:24:51.783254    4849 main.go:141] libmachine: STDERR: 
	I0729 04:24:51.783277    4849 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/flannel-418000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/flannel-418000/disk.qcow2
	I0729 04:24:51.783282    4849 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:24:51.783295    4849 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:24:51.783322    4849 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/flannel-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/flannel-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/flannel-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:6f:44:43:44:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/flannel-418000/disk.qcow2
	I0729 04:24:51.785278    4849 main.go:141] libmachine: STDOUT: 
	I0729 04:24:51.785296    4849 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:24:51.785317    4849 client.go:171] duration metric: took 259.459417ms to LocalClient.Create
	I0729 04:24:53.787465    4849 start.go:128] duration metric: took 2.285765833s to createHost
	I0729 04:24:53.787567    4849 start.go:83] releasing machines lock for "flannel-418000", held for 2.285926375s
	W0729 04:24:53.787707    4849 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:24:53.798936    4849 out.go:177] * Deleting "flannel-418000" in qemu2 ...
	W0729 04:24:53.827594    4849 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:24:53.827626    4849 start.go:729] Will try again in 5 seconds ...
	I0729 04:24:58.829642    4849 start.go:360] acquireMachinesLock for flannel-418000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:24:58.830179    4849 start.go:364] duration metric: took 458.625µs to acquireMachinesLock for "flannel-418000"
	I0729 04:24:58.830369    4849 start.go:93] Provisioning new machine with config: &{Name:flannel-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:24:58.830670    4849 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:24:58.836260    4849 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:24:58.886668    4849 start.go:159] libmachine.API.Create for "flannel-418000" (driver="qemu2")
	I0729 04:24:58.886715    4849 client.go:168] LocalClient.Create starting
	I0729 04:24:58.886854    4849 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:24:58.886923    4849 main.go:141] libmachine: Decoding PEM data...
	I0729 04:24:58.886939    4849 main.go:141] libmachine: Parsing certificate...
	I0729 04:24:58.887001    4849 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:24:58.887045    4849 main.go:141] libmachine: Decoding PEM data...
	I0729 04:24:58.887055    4849 main.go:141] libmachine: Parsing certificate...
	I0729 04:24:58.887889    4849 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:24:59.048566    4849 main.go:141] libmachine: Creating SSH key...
	I0729 04:24:59.188478    4849 main.go:141] libmachine: Creating Disk image...
	I0729 04:24:59.188486    4849 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:24:59.188701    4849 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/flannel-418000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/flannel-418000/disk.qcow2
	I0729 04:24:59.197922    4849 main.go:141] libmachine: STDOUT: 
	I0729 04:24:59.197949    4849 main.go:141] libmachine: STDERR: 
	I0729 04:24:59.198004    4849 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/flannel-418000/disk.qcow2 +20000M
	I0729 04:24:59.205961    4849 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:24:59.205983    4849 main.go:141] libmachine: STDERR: 
	I0729 04:24:59.206001    4849 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/flannel-418000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/flannel-418000/disk.qcow2
	I0729 04:24:59.206006    4849 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:24:59.206017    4849 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:24:59.206046    4849 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/flannel-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/flannel-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/flannel-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:71:de:11:cc:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/flannel-418000/disk.qcow2
	I0729 04:24:59.207706    4849 main.go:141] libmachine: STDOUT: 
	I0729 04:24:59.207728    4849 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:24:59.207740    4849 client.go:171] duration metric: took 321.031ms to LocalClient.Create
	I0729 04:25:01.209956    4849 start.go:128] duration metric: took 2.379313125s to createHost
	I0729 04:25:01.210046    4849 start.go:83] releasing machines lock for "flannel-418000", held for 2.379918791s
	W0729 04:25:01.210456    4849 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-418000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-418000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:25:01.217996    4849 out.go:177] 
	W0729 04:25:01.224107    4849 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:25:01.224152    4849 out.go:239] * 
	* 
	W0729 04:25:01.226946    4849 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:25:01.237034    4849 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.86s)
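Every failure in this group has the same signature: minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, the client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet ("Connection refused"), the VM is never created, and the test exits with status 80. A minimal sketch of a pre-flight probe for that socket, using only the Go standard library (the probe itself is hypothetical and not part of minikube or this test suite):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// Dial the socket_vmnet unix socket the same way socket_vmnet_client must
	// before it can hand a network file descriptor to qemu-system-aarch64.
	// When the daemon is not running, this reproduces the "Connection refused"
	// seen in every stderr block above, without a full `minikube start`.
	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("socket_vmnet is listening at %s\n", sock)
	}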

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (9.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-418000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-418000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.824634583s)

                                                
                                                
-- stdout --
	* [enable-default-cni-418000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-418000" primary control-plane node in "enable-default-cni-418000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-418000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:25:03.603882    4966 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:25:03.604036    4966 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:03.604039    4966 out.go:304] Setting ErrFile to fd 2...
	I0729 04:25:03.604041    4966 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:03.604173    4966 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:25:03.605222    4966 out.go:298] Setting JSON to false
	I0729 04:25:03.622361    4966 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3266,"bootTime":1722249037,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:25:03.622510    4966 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:25:03.630239    4966 out.go:177] * [enable-default-cni-418000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:25:03.638046    4966 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:25:03.638072    4966 notify.go:220] Checking for updates...
	I0729 04:25:03.646966    4966 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:25:03.651040    4966 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:25:03.654914    4966 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:25:03.659029    4966 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:25:03.663047    4966 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:25:03.667284    4966 config.go:182] Loaded profile config "multinode-369000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:25:03.667351    4966 config.go:182] Loaded profile config "stopped-upgrade-338000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:25:03.667401    4966 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:25:03.671026    4966 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:25:03.678938    4966 start.go:297] selected driver: qemu2
	I0729 04:25:03.678945    4966 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:25:03.678950    4966 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:25:03.681130    4966 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:25:03.685048    4966 out.go:177] * Automatically selected the socket_vmnet network
	E0729 04:25:03.689126    4966 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0729 04:25:03.689143    4966 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:25:03.689166    4966 cni.go:84] Creating CNI manager for "bridge"
	I0729 04:25:03.689174    4966 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:25:03.689209    4966 start.go:340] cluster config:
	{Name:enable-default-cni-418000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:25:03.692673    4966 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:25:03.701093    4966 out.go:177] * Starting "enable-default-cni-418000" primary control-plane node in "enable-default-cni-418000" cluster
	I0729 04:25:03.704983    4966 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:25:03.704995    4966 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:25:03.705004    4966 cache.go:56] Caching tarball of preloaded images
	I0729 04:25:03.705061    4966 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:25:03.705067    4966 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:25:03.705125    4966 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/enable-default-cni-418000/config.json ...
	I0729 04:25:03.705136    4966 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/enable-default-cni-418000/config.json: {Name:mkc3422b8b7a8ed4f798074b67eb6cc397e18f0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:25:03.705346    4966 start.go:360] acquireMachinesLock for enable-default-cni-418000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:25:03.705383    4966 start.go:364] duration metric: took 27.209µs to acquireMachinesLock for "enable-default-cni-418000"
	I0729 04:25:03.705395    4966 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:25:03.705425    4966 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:25:03.712992    4966 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:25:03.728780    4966 start.go:159] libmachine.API.Create for "enable-default-cni-418000" (driver="qemu2")
	I0729 04:25:03.728818    4966 client.go:168] LocalClient.Create starting
	I0729 04:25:03.728874    4966 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:25:03.728910    4966 main.go:141] libmachine: Decoding PEM data...
	I0729 04:25:03.728919    4966 main.go:141] libmachine: Parsing certificate...
	I0729 04:25:03.728954    4966 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:25:03.728981    4966 main.go:141] libmachine: Decoding PEM data...
	I0729 04:25:03.728989    4966 main.go:141] libmachine: Parsing certificate...
	I0729 04:25:03.729326    4966 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:25:03.881028    4966 main.go:141] libmachine: Creating SSH key...
	I0729 04:25:03.948076    4966 main.go:141] libmachine: Creating Disk image...
	I0729 04:25:03.948082    4966 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:25:03.948270    4966 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/enable-default-cni-418000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/enable-default-cni-418000/disk.qcow2
	I0729 04:25:03.957411    4966 main.go:141] libmachine: STDOUT: 
	I0729 04:25:03.957430    4966 main.go:141] libmachine: STDERR: 
	I0729 04:25:03.957491    4966 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/enable-default-cni-418000/disk.qcow2 +20000M
	I0729 04:25:03.965512    4966 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:25:03.965534    4966 main.go:141] libmachine: STDERR: 
	I0729 04:25:03.965550    4966 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/enable-default-cni-418000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/enable-default-cni-418000/disk.qcow2
	I0729 04:25:03.965558    4966 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:25:03.965567    4966 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:25:03.965593    4966 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/enable-default-cni-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/enable-default-cni-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/enable-default-cni-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:8b:d9:98:d8:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/enable-default-cni-418000/disk.qcow2
	I0729 04:25:03.967188    4966 main.go:141] libmachine: STDOUT: 
	I0729 04:25:03.967204    4966 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:25:03.967223    4966 client.go:171] duration metric: took 238.408875ms to LocalClient.Create
	I0729 04:25:05.969372    4966 start.go:128] duration metric: took 2.263989542s to createHost
	I0729 04:25:05.969456    4966 start.go:83] releasing machines lock for "enable-default-cni-418000", held for 2.264137125s
	W0729 04:25:05.969601    4966 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:25:05.977013    4966 out.go:177] * Deleting "enable-default-cni-418000" in qemu2 ...
	W0729 04:25:06.004011    4966 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:25:06.004052    4966 start.go:729] Will try again in 5 seconds ...
	I0729 04:25:11.006234    4966 start.go:360] acquireMachinesLock for enable-default-cni-418000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:25:11.006849    4966 start.go:364] duration metric: took 477.417µs to acquireMachinesLock for "enable-default-cni-418000"
	I0729 04:25:11.006936    4966 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:25:11.007278    4966 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:25:11.014763    4966 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:25:11.061797    4966 start.go:159] libmachine.API.Create for "enable-default-cni-418000" (driver="qemu2")
	I0729 04:25:11.061857    4966 client.go:168] LocalClient.Create starting
	I0729 04:25:11.061992    4966 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:25:11.062052    4966 main.go:141] libmachine: Decoding PEM data...
	I0729 04:25:11.062065    4966 main.go:141] libmachine: Parsing certificate...
	I0729 04:25:11.062140    4966 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:25:11.062192    4966 main.go:141] libmachine: Decoding PEM data...
	I0729 04:25:11.062203    4966 main.go:141] libmachine: Parsing certificate...
	I0729 04:25:11.062757    4966 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:25:11.222720    4966 main.go:141] libmachine: Creating SSH key...
	I0729 04:25:11.337960    4966 main.go:141] libmachine: Creating Disk image...
	I0729 04:25:11.337969    4966 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:25:11.338156    4966 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/enable-default-cni-418000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/enable-default-cni-418000/disk.qcow2
	I0729 04:25:11.347474    4966 main.go:141] libmachine: STDOUT: 
	I0729 04:25:11.347492    4966 main.go:141] libmachine: STDERR: 
	I0729 04:25:11.347556    4966 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/enable-default-cni-418000/disk.qcow2 +20000M
	I0729 04:25:11.355418    4966 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:25:11.355433    4966 main.go:141] libmachine: STDERR: 
	I0729 04:25:11.355452    4966 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/enable-default-cni-418000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/enable-default-cni-418000/disk.qcow2
	I0729 04:25:11.355456    4966 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:25:11.355466    4966 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:25:11.355493    4966 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/enable-default-cni-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/enable-default-cni-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/enable-default-cni-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:fa:23:a7:3e:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/enable-default-cni-418000/disk.qcow2
	I0729 04:25:11.357134    4966 main.go:141] libmachine: STDOUT: 
	I0729 04:25:11.357148    4966 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:25:11.357161    4966 client.go:171] duration metric: took 295.306417ms to LocalClient.Create
	I0729 04:25:13.359315    4966 start.go:128] duration metric: took 2.352067333s to createHost
	I0729 04:25:13.359389    4966 start.go:83] releasing machines lock for "enable-default-cni-418000", held for 2.352589292s
	W0729 04:25:13.359803    4966 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-418000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-418000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:25:13.374505    4966 out.go:177] 
	W0729 04:25:13.378706    4966 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:25:13.378744    4966 out.go:239] * 
	* 
	W0729 04:25:13.381462    4966 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:25:13.392355    4966 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.83s)
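The E0729 line from start_flags.go:464 above shows that --enable-default-cni is deprecated and is rewritten to --cni=bridge, so this test provisions the same bridge CNI configuration as the bridge group below and fails for the same socket_vmnet reason, not because of the flag. Assuming the rewrite behaves as that log line states, an equivalent invocation without the deprecated flag would be:

	out/minikube-darwin-arm64 start -p enable-default-cni-418000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2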

                                                
                                    
TestNetworkPlugins/group/bridge/Start (9.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-418000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-418000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.895056125s)

                                                
                                                
-- stdout --
	* [bridge-418000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-418000" primary control-plane node in "bridge-418000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-418000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:25:15.560546    5078 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:25:15.560681    5078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:15.560685    5078 out.go:304] Setting ErrFile to fd 2...
	I0729 04:25:15.560688    5078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:15.560833    5078 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:25:15.562104    5078 out.go:298] Setting JSON to false
	I0729 04:25:15.580450    5078 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3278,"bootTime":1722249037,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:25:15.580564    5078 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:25:15.585447    5078 out.go:177] * [bridge-418000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:25:15.592564    5078 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:25:15.592676    5078 notify.go:220] Checking for updates...
	I0729 04:25:15.599423    5078 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:25:15.602486    5078 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:25:15.605366    5078 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:25:15.608404    5078 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:25:15.611487    5078 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:25:15.614758    5078 config.go:182] Loaded profile config "multinode-369000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:25:15.614825    5078 config.go:182] Loaded profile config "stopped-upgrade-338000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:25:15.614882    5078 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:25:15.618447    5078 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:25:15.624395    5078 start.go:297] selected driver: qemu2
	I0729 04:25:15.624401    5078 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:25:15.624408    5078 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:25:15.627255    5078 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:25:15.630390    5078 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:25:15.633514    5078 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:25:15.633543    5078 cni.go:84] Creating CNI manager for "bridge"
	I0729 04:25:15.633547    5078 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:25:15.633568    5078 start.go:340] cluster config:
	{Name:bridge-418000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:25:15.637351    5078 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:25:15.644421    5078 out.go:177] * Starting "bridge-418000" primary control-plane node in "bridge-418000" cluster
	I0729 04:25:15.648452    5078 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:25:15.648468    5078 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:25:15.648513    5078 cache.go:56] Caching tarball of preloaded images
	I0729 04:25:15.648572    5078 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:25:15.648577    5078 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:25:15.648632    5078 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/bridge-418000/config.json ...
	I0729 04:25:15.648642    5078 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/bridge-418000/config.json: {Name:mk975d011934ec97c4ff013fb359124a1893a650 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:25:15.648849    5078 start.go:360] acquireMachinesLock for bridge-418000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:25:15.648880    5078 start.go:364] duration metric: took 25.333µs to acquireMachinesLock for "bridge-418000"
	I0729 04:25:15.648891    5078 start.go:93] Provisioning new machine with config: &{Name:bridge-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:25:15.648928    5078 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:25:15.657472    5078 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:25:15.673945    5078 start.go:159] libmachine.API.Create for "bridge-418000" (driver="qemu2")
	I0729 04:25:15.673972    5078 client.go:168] LocalClient.Create starting
	I0729 04:25:15.674049    5078 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:25:15.674082    5078 main.go:141] libmachine: Decoding PEM data...
	I0729 04:25:15.674091    5078 main.go:141] libmachine: Parsing certificate...
	I0729 04:25:15.674138    5078 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:25:15.674162    5078 main.go:141] libmachine: Decoding PEM data...
	I0729 04:25:15.674171    5078 main.go:141] libmachine: Parsing certificate...
	I0729 04:25:15.674546    5078 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:25:15.826378    5078 main.go:141] libmachine: Creating SSH key...
	I0729 04:25:16.022229    5078 main.go:141] libmachine: Creating Disk image...
	I0729 04:25:16.022238    5078 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:25:16.022433    5078 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/bridge-418000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/bridge-418000/disk.qcow2
	I0729 04:25:16.032010    5078 main.go:141] libmachine: STDOUT: 
	I0729 04:25:16.032028    5078 main.go:141] libmachine: STDERR: 
	I0729 04:25:16.032085    5078 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/bridge-418000/disk.qcow2 +20000M
	I0729 04:25:16.040212    5078 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:25:16.040225    5078 main.go:141] libmachine: STDERR: 
	I0729 04:25:16.040238    5078 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/bridge-418000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/bridge-418000/disk.qcow2
	I0729 04:25:16.040242    5078 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:25:16.040254    5078 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:25:16.040283    5078 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/bridge-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/bridge-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/bridge-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:ca:ee:10:a0:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/bridge-418000/disk.qcow2
	I0729 04:25:16.041965    5078 main.go:141] libmachine: STDOUT: 
	I0729 04:25:16.041977    5078 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:25:16.042000    5078 client.go:171] duration metric: took 368.035042ms to LocalClient.Create
	I0729 04:25:18.044148    5078 start.go:128] duration metric: took 2.395271041s to createHost
	I0729 04:25:18.044270    5078 start.go:83] releasing machines lock for "bridge-418000", held for 2.395458792s
	W0729 04:25:18.044336    5078 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:25:18.059588    5078 out.go:177] * Deleting "bridge-418000" in qemu2 ...
	W0729 04:25:18.085610    5078 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:25:18.085646    5078 start.go:729] Will try again in 5 seconds ...
	I0729 04:25:23.087691    5078 start.go:360] acquireMachinesLock for bridge-418000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:25:23.088232    5078 start.go:364] duration metric: took 386.917µs to acquireMachinesLock for "bridge-418000"
	I0729 04:25:23.088387    5078 start.go:93] Provisioning new machine with config: &{Name:bridge-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:25:23.088621    5078 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:25:23.094263    5078 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:25:23.144684    5078 start.go:159] libmachine.API.Create for "bridge-418000" (driver="qemu2")
	I0729 04:25:23.144774    5078 client.go:168] LocalClient.Create starting
	I0729 04:25:23.144958    5078 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:25:23.145029    5078 main.go:141] libmachine: Decoding PEM data...
	I0729 04:25:23.145048    5078 main.go:141] libmachine: Parsing certificate...
	I0729 04:25:23.145109    5078 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:25:23.145153    5078 main.go:141] libmachine: Decoding PEM data...
	I0729 04:25:23.145165    5078 main.go:141] libmachine: Parsing certificate...
	I0729 04:25:23.145691    5078 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:25:23.306664    5078 main.go:141] libmachine: Creating SSH key...
	I0729 04:25:23.362609    5078 main.go:141] libmachine: Creating Disk image...
	I0729 04:25:23.362619    5078 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:25:23.362835    5078 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/bridge-418000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/bridge-418000/disk.qcow2
	I0729 04:25:23.373179    5078 main.go:141] libmachine: STDOUT: 
	I0729 04:25:23.373203    5078 main.go:141] libmachine: STDERR: 
	I0729 04:25:23.373297    5078 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/bridge-418000/disk.qcow2 +20000M
	I0729 04:25:23.382339    5078 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:25:23.382362    5078 main.go:141] libmachine: STDERR: 
	I0729 04:25:23.382385    5078 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/bridge-418000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/bridge-418000/disk.qcow2
	I0729 04:25:23.382390    5078 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:25:23.382403    5078 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:25:23.382432    5078 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/bridge-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/bridge-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/bridge-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:be:0c:77:33:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/bridge-418000/disk.qcow2
	I0729 04:25:23.384536    5078 main.go:141] libmachine: STDOUT: 
	I0729 04:25:23.384552    5078 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:25:23.384564    5078 client.go:171] duration metric: took 239.78ms to LocalClient.Create
	I0729 04:25:25.386724    5078 start.go:128] duration metric: took 2.298143209s to createHost
	I0729 04:25:25.386848    5078 start.go:83] releasing machines lock for "bridge-418000", held for 2.298665333s
	W0729 04:25:25.387329    5078 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-418000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-418000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:25:25.397008    5078 out.go:177] 
	W0729 04:25:25.403124    5078 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:25:25.403155    5078 out.go:239] * 
	* 
	W0729 04:25:25.406087    5078 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:25:25.412024    5078 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.90s)
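Triage note: every start failure in this group shares one root cause. The disk-image steps above (qemu-img convert and resize) succeed; the run only fails when libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client and the client cannot reach the daemon socket at /var/run/socket_vmnet. A minimal shell check on the build agent, assuming the paths shown in the log and a Homebrew-managed socket_vmnet, might look like:

	# Is the socket_vmnet daemon running, and does its socket exist?
	pgrep -fl socket_vmnet || echo "socket_vmnet daemon not running" >&2
	[ -S /var/run/socket_vmnet ] || echo "no socket at /var/run/socket_vmnet" >&2
	# Per the minikube qemu2 driver docs, a Homebrew install is usually started with:
	#   sudo brew services start socket_vmnet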
TestNetworkPlugins/group/kubenet/Start (9.89s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-418000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-418000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.892818916s)
-- stdout --
	* [kubenet-418000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-418000" primary control-plane node in "kubenet-418000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-418000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0729 04:25:27.579467    5187 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:25:27.579589    5187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:27.579592    5187 out.go:304] Setting ErrFile to fd 2...
	I0729 04:25:27.579595    5187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:27.579719    5187 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:25:27.580812    5187 out.go:298] Setting JSON to false
	I0729 04:25:27.597094    5187 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3290,"bootTime":1722249037,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:25:27.597158    5187 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:25:27.602793    5187 out.go:177] * [kubenet-418000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:25:27.610622    5187 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:25:27.610711    5187 notify.go:220] Checking for updates...
	I0729 04:25:27.617743    5187 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:25:27.619121    5187 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:25:27.621726    5187 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:25:27.624718    5187 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:25:27.627731    5187 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:25:27.631192    5187 config.go:182] Loaded profile config "multinode-369000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:25:27.631257    5187 config.go:182] Loaded profile config "stopped-upgrade-338000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:25:27.631302    5187 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:25:27.635749    5187 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:25:27.642742    5187 start.go:297] selected driver: qemu2
	I0729 04:25:27.642747    5187 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:25:27.642753    5187 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:25:27.644929    5187 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:25:27.647760    5187 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:25:27.650788    5187 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:25:27.650802    5187 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0729 04:25:27.650832    5187 start.go:340] cluster config:
	{Name:kubenet-418000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:25:27.654296    5187 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:25:27.661762    5187 out.go:177] * Starting "kubenet-418000" primary control-plane node in "kubenet-418000" cluster
	I0729 04:25:27.664661    5187 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:25:27.664676    5187 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:25:27.664688    5187 cache.go:56] Caching tarball of preloaded images
	I0729 04:25:27.664746    5187 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:25:27.664759    5187 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:25:27.664819    5187 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/kubenet-418000/config.json ...
	I0729 04:25:27.664841    5187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/kubenet-418000/config.json: {Name:mk885430559ba5c8be24eb47f3a43214c39bedc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:25:27.665183    5187 start.go:360] acquireMachinesLock for kubenet-418000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:25:27.665214    5187 start.go:364] duration metric: took 25.833µs to acquireMachinesLock for "kubenet-418000"
	I0729 04:25:27.665226    5187 start.go:93] Provisioning new machine with config: &{Name:kubenet-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:25:27.665250    5187 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:25:27.668754    5187 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:25:27.684087    5187 start.go:159] libmachine.API.Create for "kubenet-418000" (driver="qemu2")
	I0729 04:25:27.684118    5187 client.go:168] LocalClient.Create starting
	I0729 04:25:27.684173    5187 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:25:27.684203    5187 main.go:141] libmachine: Decoding PEM data...
	I0729 04:25:27.684210    5187 main.go:141] libmachine: Parsing certificate...
	I0729 04:25:27.684251    5187 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:25:27.684276    5187 main.go:141] libmachine: Decoding PEM data...
	I0729 04:25:27.684284    5187 main.go:141] libmachine: Parsing certificate...
	I0729 04:25:27.684714    5187 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:25:27.836300    5187 main.go:141] libmachine: Creating SSH key...
	I0729 04:25:28.065457    5187 main.go:141] libmachine: Creating Disk image...
	I0729 04:25:28.065467    5187 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:25:28.065680    5187 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubenet-418000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubenet-418000/disk.qcow2
	I0729 04:25:28.075417    5187 main.go:141] libmachine: STDOUT: 
	I0729 04:25:28.075440    5187 main.go:141] libmachine: STDERR: 
	I0729 04:25:28.075497    5187 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubenet-418000/disk.qcow2 +20000M
	I0729 04:25:28.083768    5187 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:25:28.083782    5187 main.go:141] libmachine: STDERR: 
	I0729 04:25:28.083794    5187 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubenet-418000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubenet-418000/disk.qcow2
	I0729 04:25:28.083800    5187 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:25:28.083813    5187 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:25:28.083839    5187 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubenet-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubenet-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubenet-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:dd:3a:0e:45:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubenet-418000/disk.qcow2
	I0729 04:25:28.085567    5187 main.go:141] libmachine: STDOUT: 
	I0729 04:25:28.085585    5187 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:25:28.085607    5187 client.go:171] duration metric: took 401.497541ms to LocalClient.Create
	I0729 04:25:30.087872    5187 start.go:128] duration metric: took 2.422648459s to createHost
	I0729 04:25:30.087980    5187 start.go:83] releasing machines lock for "kubenet-418000", held for 2.422835s
	W0729 04:25:30.088032    5187 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:25:30.094539    5187 out.go:177] * Deleting "kubenet-418000" in qemu2 ...
	W0729 04:25:30.122237    5187 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:25:30.122267    5187 start.go:729] Will try again in 5 seconds ...
	I0729 04:25:35.124282    5187 start.go:360] acquireMachinesLock for kubenet-418000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:25:35.124894    5187 start.go:364] duration metric: took 518.291µs to acquireMachinesLock for "kubenet-418000"
	I0729 04:25:35.125057    5187 start.go:93] Provisioning new machine with config: &{Name:kubenet-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:25:35.125360    5187 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:25:35.134082    5187 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:25:35.182862    5187 start.go:159] libmachine.API.Create for "kubenet-418000" (driver="qemu2")
	I0729 04:25:35.182948    5187 client.go:168] LocalClient.Create starting
	I0729 04:25:35.183131    5187 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:25:35.183204    5187 main.go:141] libmachine: Decoding PEM data...
	I0729 04:25:35.183223    5187 main.go:141] libmachine: Parsing certificate...
	I0729 04:25:35.183286    5187 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:25:35.183331    5187 main.go:141] libmachine: Decoding PEM data...
	I0729 04:25:35.183341    5187 main.go:141] libmachine: Parsing certificate...
	I0729 04:25:35.183962    5187 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:25:35.344558    5187 main.go:141] libmachine: Creating SSH key...
	I0729 04:25:35.380580    5187 main.go:141] libmachine: Creating Disk image...
	I0729 04:25:35.380585    5187 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:25:35.380779    5187 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubenet-418000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubenet-418000/disk.qcow2
	I0729 04:25:35.390060    5187 main.go:141] libmachine: STDOUT: 
	I0729 04:25:35.390077    5187 main.go:141] libmachine: STDERR: 
	I0729 04:25:35.390125    5187 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubenet-418000/disk.qcow2 +20000M
	I0729 04:25:35.398347    5187 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:25:35.398369    5187 main.go:141] libmachine: STDERR: 
	I0729 04:25:35.398381    5187 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubenet-418000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubenet-418000/disk.qcow2
	I0729 04:25:35.398389    5187 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:25:35.398404    5187 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:25:35.398433    5187 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubenet-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubenet-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubenet-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:40:b0:73:f2:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/kubenet-418000/disk.qcow2
	I0729 04:25:35.400243    5187 main.go:141] libmachine: STDOUT: 
	I0729 04:25:35.400259    5187 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:25:35.400272    5187 client.go:171] duration metric: took 217.312209ms to LocalClient.Create
	I0729 04:25:37.402414    5187 start.go:128] duration metric: took 2.277063416s to createHost
	I0729 04:25:37.402529    5187 start.go:83] releasing machines lock for "kubenet-418000", held for 2.277673833s
	W0729 04:25:37.402981    5187 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-418000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-418000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:25:37.417616    5187 out.go:177] 
	W0729 04:25:37.420644    5187 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:25:37.420672    5187 out.go:239] * 
	* 
	W0729 04:25:37.423359    5187 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:25:37.431528    5187 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.89s)
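Triage note: the log above shows minikube's single retry path: StartHost fails, the half-created "kubenet-418000" profile is deleted, it waits 5 seconds, retries once, and then exits with GUEST_PROVISION (exit status 80). Because socket_vmnet_client just wraps whatever command follows the socket path (here the whole qemu-system-aarch64 invocation), the failure can probably be reproduced without QEMU at all; a sketch, assuming the binary path from the log:

	# 'true' stands in for the qemu-system-aarch64 command line; if the daemon is down,
	# this should print the same 'Failed to connect to "/var/run/socket_vmnet"' error.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true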
TestStartStop/group/old-k8s-version/serial/FirstStart (10.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-993000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-993000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.045378625s)
-- stdout --
	* [old-k8s-version-993000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-993000" primary control-plane node in "old-k8s-version-993000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-993000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0729 04:25:39.730268    5298 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:25:39.730409    5298 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:39.730412    5298 out.go:304] Setting ErrFile to fd 2...
	I0729 04:25:39.730415    5298 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:39.730550    5298 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:25:39.731932    5298 out.go:298] Setting JSON to false
	I0729 04:25:39.750015    5298 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3302,"bootTime":1722249037,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:25:39.750081    5298 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:25:39.760776    5298 out.go:177] * [old-k8s-version-993000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:25:39.766822    5298 notify.go:220] Checking for updates...
	I0729 04:25:39.770863    5298 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:25:39.778682    5298 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:25:39.785814    5298 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:25:39.792813    5298 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:25:39.795818    5298 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:25:39.798772    5298 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:25:39.802151    5298 config.go:182] Loaded profile config "multinode-369000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:25:39.802217    5298 config.go:182] Loaded profile config "stopped-upgrade-338000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:25:39.802265    5298 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:25:39.809759    5298 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:25:39.817721    5298 start.go:297] selected driver: qemu2
	I0729 04:25:39.817726    5298 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:25:39.817732    5298 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:25:39.820050    5298 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:25:39.827811    5298 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:25:39.834933    5298 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:25:39.834970    5298 cni.go:84] Creating CNI manager for ""
	I0729 04:25:39.834980    5298 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 04:25:39.835021    5298 start.go:340] cluster config:
	{Name:old-k8s-version-993000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-993000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:25:39.838959    5298 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:25:39.845776    5298 out.go:177] * Starting "old-k8s-version-993000" primary control-plane node in "old-k8s-version-993000" cluster
	I0729 04:25:39.849771    5298 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 04:25:39.849793    5298 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 04:25:39.849806    5298 cache.go:56] Caching tarball of preloaded images
	I0729 04:25:39.849875    5298 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:25:39.849881    5298 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 04:25:39.849947    5298 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/old-k8s-version-993000/config.json ...
	I0729 04:25:39.849960    5298 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/old-k8s-version-993000/config.json: {Name:mk1271b9e95b6010b4ccbd78fe1a522e464d9f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:25:39.853950    5298 start.go:360] acquireMachinesLock for old-k8s-version-993000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:25:39.853995    5298 start.go:364] duration metric: took 36.334µs to acquireMachinesLock for "old-k8s-version-993000"
	I0729 04:25:39.854009    5298 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-993000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-993000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:25:39.854041    5298 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:25:39.864653    5298 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:25:39.880497    5298 start.go:159] libmachine.API.Create for "old-k8s-version-993000" (driver="qemu2")
	I0729 04:25:39.880525    5298 client.go:168] LocalClient.Create starting
	I0729 04:25:39.880603    5298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:25:39.880636    5298 main.go:141] libmachine: Decoding PEM data...
	I0729 04:25:39.880645    5298 main.go:141] libmachine: Parsing certificate...
	I0729 04:25:39.880687    5298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:25:39.880710    5298 main.go:141] libmachine: Decoding PEM data...
	I0729 04:25:39.880715    5298 main.go:141] libmachine: Parsing certificate...
	I0729 04:25:39.881079    5298 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:25:40.116864    5298 main.go:141] libmachine: Creating SSH key...
	I0729 04:25:40.297164    5298 main.go:141] libmachine: Creating Disk image...
	I0729 04:25:40.297177    5298 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:25:40.297389    5298 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/old-k8s-version-993000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/old-k8s-version-993000/disk.qcow2
	I0729 04:25:40.306899    5298 main.go:141] libmachine: STDOUT: 
	I0729 04:25:40.306919    5298 main.go:141] libmachine: STDERR: 
	I0729 04:25:40.306969    5298 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/old-k8s-version-993000/disk.qcow2 +20000M
	I0729 04:25:40.315010    5298 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:25:40.315028    5298 main.go:141] libmachine: STDERR: 
	I0729 04:25:40.315042    5298 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/old-k8s-version-993000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/old-k8s-version-993000/disk.qcow2
	I0729 04:25:40.315046    5298 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:25:40.315062    5298 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:25:40.315093    5298 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/old-k8s-version-993000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/old-k8s-version-993000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/old-k8s-version-993000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:8a:f9:ed:a6:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/old-k8s-version-993000/disk.qcow2
	I0729 04:25:40.317066    5298 main.go:141] libmachine: STDOUT: 
	I0729 04:25:40.317086    5298 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:25:40.317109    5298 client.go:171] duration metric: took 436.589958ms to LocalClient.Create
	I0729 04:25:42.319264    5298 start.go:128] duration metric: took 2.465271s to createHost
	I0729 04:25:42.319395    5298 start.go:83] releasing machines lock for "old-k8s-version-993000", held for 2.465468625s
	W0729 04:25:42.319455    5298 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:25:42.332628    5298 out.go:177] * Deleting "old-k8s-version-993000" in qemu2 ...
	W0729 04:25:42.366492    5298 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:25:42.366560    5298 start.go:729] Will try again in 5 seconds ...
	I0729 04:25:47.368635    5298 start.go:360] acquireMachinesLock for old-k8s-version-993000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:25:47.369147    5298 start.go:364] duration metric: took 372.292µs to acquireMachinesLock for "old-k8s-version-993000"
	I0729 04:25:47.369225    5298 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-993000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-993000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:25:47.369584    5298 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:25:47.378159    5298 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:25:47.426901    5298 start.go:159] libmachine.API.Create for "old-k8s-version-993000" (driver="qemu2")
	I0729 04:25:47.426961    5298 client.go:168] LocalClient.Create starting
	I0729 04:25:47.427081    5298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:25:47.427148    5298 main.go:141] libmachine: Decoding PEM data...
	I0729 04:25:47.427165    5298 main.go:141] libmachine: Parsing certificate...
	I0729 04:25:47.427222    5298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:25:47.427270    5298 main.go:141] libmachine: Decoding PEM data...
	I0729 04:25:47.427287    5298 main.go:141] libmachine: Parsing certificate...
	I0729 04:25:47.427872    5298 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:25:47.592679    5298 main.go:141] libmachine: Creating SSH key...
	I0729 04:25:47.683646    5298 main.go:141] libmachine: Creating Disk image...
	I0729 04:25:47.683658    5298 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:25:47.683868    5298 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/old-k8s-version-993000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/old-k8s-version-993000/disk.qcow2
	I0729 04:25:47.693090    5298 main.go:141] libmachine: STDOUT: 
	I0729 04:25:47.693106    5298 main.go:141] libmachine: STDERR: 
	I0729 04:25:47.693159    5298 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/old-k8s-version-993000/disk.qcow2 +20000M
	I0729 04:25:47.701230    5298 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:25:47.701246    5298 main.go:141] libmachine: STDERR: 
	I0729 04:25:47.701257    5298 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/old-k8s-version-993000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/old-k8s-version-993000/disk.qcow2
	I0729 04:25:47.701263    5298 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:25:47.701274    5298 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:25:47.701305    5298 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/old-k8s-version-993000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/old-k8s-version-993000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/old-k8s-version-993000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:9f:2f:dd:9c:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/old-k8s-version-993000/disk.qcow2
	I0729 04:25:47.702950    5298 main.go:141] libmachine: STDOUT: 
	I0729 04:25:47.702969    5298 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:25:47.702982    5298 client.go:171] duration metric: took 276.024583ms to LocalClient.Create
	I0729 04:25:49.705144    5298 start.go:128] duration metric: took 2.335600334s to createHost
	I0729 04:25:49.705253    5298 start.go:83] releasing machines lock for "old-k8s-version-993000", held for 2.336157833s
	W0729 04:25:49.705689    5298 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-993000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-993000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:25:49.715261    5298 out.go:177] 
	W0729 04:25:49.722322    5298 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:25:49.722352    5298 out.go:239] * 
	* 
	W0729 04:25:49.724973    5298 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:25:49.733051    5298 out.go:177] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-993000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-993000 -n old-k8s-version-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-993000 -n old-k8s-version-993000: exit status 7 (66.157917ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-993000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.11s)
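Triage note: the post-mortem's "exit status 7 (may be ok)" is minikube's bit-encoded status. To the author's reading of the minikube status help text, each component sets one bit, so 7 should decode as 1 (host not OK) + 2 (cluster not OK) + 4 (Kubernetes not OK), consistent with a host that was never provisioned. A quick check of the raw code:

	# Print the host state and the bit-encoded exit status together.
	out/minikube-darwin-arm64 status -p old-k8s-version-993000 --format='{{.Host}}'
	echo "status exit code: $?"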
TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-993000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-993000 create -f testdata/busybox.yaml: exit status 1 (29.601458ms)
** stderr ** 
	error: context "old-k8s-version-993000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-993000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-993000 -n old-k8s-version-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-993000 -n old-k8s-version-993000: exit status 7 (30.397375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-993000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-993000 -n old-k8s-version-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-993000 -n old-k8s-version-993000: exit status 7 (28.723708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-993000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
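
This failure is downstream of FirstStart: since the cluster was never created, the kubeconfig contains no "old-k8s-version-993000" context, so every kubectl --context invocation in the remaining subtests fails identically. A hedged standalone sketch of checking for the context up front ("kubectl config get-contexts -o name" is standard kubectl and prints one context name per line):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// contextExists reports whether the current kubeconfig defines the
	// named context.
	func contextExists(name string) (bool, error) {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false, err
		}
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := contextExists("old-k8s-version-993000")
		fmt.Println(ok, err) // false <nil> in the state this report captures
	}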

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-993000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-993000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-993000 describe deploy/metrics-server -n kube-system: exit status 1 (26.517ms)

** stderr ** 
	error: context "old-k8s-version-993000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-993000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-993000 -n old-k8s-version-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-993000 -n old-k8s-version-993000: exit status 7 (29.925333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-993000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
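
The expected substring " fake.domain/registry.k8s.io/echoserver:1.4" shows how the test combines the --registries=MetricsServer=fake.domain override with the --images value: the fake registry is prepended to the full image reference. A tiny sketch of that composition (an assumption inferred only from the expected output above, not taken from minikube's addon code):

	package main

	import "fmt"

	func main() {
		// Assumed composition: registry override + "/" + image reference.
		registry := "fake.domain"
		image := "registry.k8s.io/echoserver:1.4"
		fmt.Println(registry + "/" + image) // fake.domain/registry.k8s.io/echoserver:1.4
	}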

TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-993000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-993000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.187576417s)

-- stdout --
	* [old-k8s-version-993000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-993000" primary control-plane node in "old-k8s-version-993000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-993000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-993000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:25:51.942585    5355 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:25:51.942710    5355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:51.942713    5355 out.go:304] Setting ErrFile to fd 2...
	I0729 04:25:51.942716    5355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:51.942844    5355 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:25:51.943874    5355 out.go:298] Setting JSON to false
	I0729 04:25:51.960209    5355 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3314,"bootTime":1722249037,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:25:51.960283    5355 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:25:51.964466    5355 out.go:177] * [old-k8s-version-993000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:25:51.971407    5355 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:25:51.971475    5355 notify.go:220] Checking for updates...
	I0729 04:25:51.978434    5355 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:25:51.981431    5355 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:25:51.984449    5355 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:25:51.987391    5355 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:25:51.990408    5355 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:25:51.993601    5355 config.go:182] Loaded profile config "old-k8s-version-993000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 04:25:51.996384    5355 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 04:25:51.997700    5355 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:25:52.002476    5355 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:25:52.009319    5355 start.go:297] selected driver: qemu2
	I0729 04:25:52.009326    5355 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-993000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-993000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:25:52.009389    5355 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:25:52.011616    5355 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:25:52.011666    5355 cni.go:84] Creating CNI manager for ""
	I0729 04:25:52.011677    5355 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 04:25:52.011697    5355 start.go:340] cluster config:
	{Name:old-k8s-version-993000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-993000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:25:52.015208    5355 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:25:52.022452    5355 out.go:177] * Starting "old-k8s-version-993000" primary control-plane node in "old-k8s-version-993000" cluster
	I0729 04:25:52.026388    5355 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 04:25:52.026402    5355 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 04:25:52.026411    5355 cache.go:56] Caching tarball of preloaded images
	I0729 04:25:52.026468    5355 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:25:52.026474    5355 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 04:25:52.026534    5355 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/old-k8s-version-993000/config.json ...
	I0729 04:25:52.027020    5355 start.go:360] acquireMachinesLock for old-k8s-version-993000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:25:52.027050    5355 start.go:364] duration metric: took 24µs to acquireMachinesLock for "old-k8s-version-993000"
	I0729 04:25:52.027060    5355 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:25:52.027065    5355 fix.go:54] fixHost starting: 
	I0729 04:25:52.027182    5355 fix.go:112] recreateIfNeeded on old-k8s-version-993000: state=Stopped err=<nil>
	W0729 04:25:52.027191    5355 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:25:52.031449    5355 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-993000" ...
	I0729 04:25:52.039458    5355 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:25:52.039504    5355 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/old-k8s-version-993000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/old-k8s-version-993000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/old-k8s-version-993000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:9f:2f:dd:9c:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/old-k8s-version-993000/disk.qcow2
	I0729 04:25:52.041622    5355 main.go:141] libmachine: STDOUT: 
	I0729 04:25:52.041642    5355 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:25:52.041673    5355 fix.go:56] duration metric: took 14.608ms for fixHost
	I0729 04:25:52.041678    5355 start.go:83] releasing machines lock for "old-k8s-version-993000", held for 14.623792ms
	W0729 04:25:52.041684    5355 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:25:52.041724    5355 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:25:52.041729    5355 start.go:729] Will try again in 5 seconds ...
	I0729 04:25:57.043953    5355 start.go:360] acquireMachinesLock for old-k8s-version-993000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:25:57.044519    5355 start.go:364] duration metric: took 423.125µs to acquireMachinesLock for "old-k8s-version-993000"
	I0729 04:25:57.044680    5355 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:25:57.044701    5355 fix.go:54] fixHost starting: 
	I0729 04:25:57.045479    5355 fix.go:112] recreateIfNeeded on old-k8s-version-993000: state=Stopped err=<nil>
	W0729 04:25:57.045505    5355 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:25:57.054041    5355 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-993000" ...
	I0729 04:25:57.058064    5355 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:25:57.058302    5355 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/old-k8s-version-993000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/old-k8s-version-993000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/old-k8s-version-993000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:9f:2f:dd:9c:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/old-k8s-version-993000/disk.qcow2
	I0729 04:25:57.067655    5355 main.go:141] libmachine: STDOUT: 
	I0729 04:25:57.067713    5355 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:25:57.067814    5355 fix.go:56] duration metric: took 23.114042ms for fixHost
	I0729 04:25:57.067830    5355 start.go:83] releasing machines lock for "old-k8s-version-993000", held for 23.289542ms
	W0729 04:25:57.067980    5355 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-993000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-993000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:25:57.076046    5355 out.go:177] 
	W0729 04:25:57.080187    5355 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:25:57.080211    5355 out.go:239] * 
	* 
	W0729 04:25:57.083016    5355 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:25:57.090080    5355 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-993000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-993000 -n old-k8s-version-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-993000 -n old-k8s-version-993000: exit status 7 (62.637667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-993000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
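
The stderr trace above shows minikube's restart behavior on a driver failure: one attempt, the warning "StartHost failed, but will try again", a fixed 5-second pause ("Will try again in 5 seconds ..."), then exactly one more attempt before exiting with GUEST_PROVISION. A simplified sketch of that single-retry shape (an illustration of the logged behavior, not minikube's actual start code):

	package main

	import (
		"errors"
		"log"
		"time"
	)

	// startWithRetry mirrors the log: try once, wait 5 s, try once more.
	func startWithRetry(start func() error) error {
		if err := start(); err != nil {
			log.Printf("StartHost failed, but will try again: %v", err)
			time.Sleep(5 * time.Second)
			return start()
		}
		return nil
	}

	func main() {
		err := startWithRetry(func() error {
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		})
		log.Printf("final error: %v", err) // both attempts fail, as in the log
	}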

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-993000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-993000 -n old-k8s-version-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-993000 -n old-k8s-version-993000: exit status 7 (32.033833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-993000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-993000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-993000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-993000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.007583ms)

** stderr ** 
	error: context "old-k8s-version-993000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-993000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-993000 -n old-k8s-version-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-993000 -n old-k8s-version-993000: exit status 7 (29.307875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-993000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-993000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-993000 -n old-k8s-version-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-993000 -n old-k8s-version-993000: exit status 7 (28.134416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-993000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
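
The "(-want +got)" block above is a go-cmp style diff: every expected v1.20.0 image is reported missing because "image list" ran against a VM that never started. A sketch of how such a diff is produced, assuming github.com/google/go-cmp (whose output format matches the block above):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"k8s.gcr.io/coredns:1.7.0",
			"k8s.gcr.io/etcd:3.4.13-0",
			"k8s.gcr.io/kube-apiserver:v1.20.0",
		}
		var got []string // image list returned nothing: the host is stopped
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
		}
	}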

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-993000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-993000 --alsologtostderr -v=1: exit status 83 (40.186542ms)

-- stdout --
	* The control-plane node old-k8s-version-993000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-993000"

-- /stdout --
** stderr ** 
	I0729 04:25:57.351255    5378 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:25:57.352157    5378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:57.352161    5378 out.go:304] Setting ErrFile to fd 2...
	I0729 04:25:57.352163    5378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:57.352286    5378 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:25:57.352485    5378 out.go:298] Setting JSON to false
	I0729 04:25:57.352493    5378 mustload.go:65] Loading cluster: old-k8s-version-993000
	I0729 04:25:57.352679    5378 config.go:182] Loaded profile config "old-k8s-version-993000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 04:25:57.355738    5378 out.go:177] * The control-plane node old-k8s-version-993000 host is not running: state=Stopped
	I0729 04:25:57.358495    5378 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-993000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-993000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-993000 -n old-k8s-version-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-993000 -n old-k8s-version-993000: exit status 7 (28.593083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-993000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-993000 -n old-k8s-version-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-993000 -n old-k8s-version-993000: exit status 7 (29.147625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-993000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
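
Note the distinct exit status here: pause exits 83 rather than 80, and its own stdout explains why (the control-plane host is Stopped, so there is nothing to pause). A standalone sketch, not from helpers_test.go, of capturing such an exit status with the Go standard library:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "old-k8s-version-993000")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// In this report the pause attempt exited 83 because the VM is stopped.
			fmt.Println("exit code:", ee.ExitCode())
		}
	}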

TestStartStop/group/no-preload/serial/FirstStart (9.99s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-561000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-561000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.924637625s)

-- stdout --
	* [no-preload-561000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-561000" primary control-plane node in "no-preload-561000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-561000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:25:57.662429    5395 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:25:57.662556    5395 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:57.662560    5395 out.go:304] Setting ErrFile to fd 2...
	I0729 04:25:57.662562    5395 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:57.662714    5395 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:25:57.663827    5395 out.go:298] Setting JSON to false
	I0729 04:25:57.679948    5395 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3320,"bootTime":1722249037,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:25:57.680018    5395 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:25:57.684561    5395 out.go:177] * [no-preload-561000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:25:57.691489    5395 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:25:57.691517    5395 notify.go:220] Checking for updates...
	I0729 04:25:57.698460    5395 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:25:57.701512    5395 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:25:57.704512    5395 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:25:57.707433    5395 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:25:57.710493    5395 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:25:57.713747    5395 config.go:182] Loaded profile config "multinode-369000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:25:57.713811    5395 config.go:182] Loaded profile config "stopped-upgrade-338000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:25:57.713863    5395 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:25:57.717520    5395 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:25:57.724490    5395 start.go:297] selected driver: qemu2
	I0729 04:25:57.724496    5395 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:25:57.724510    5395 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:25:57.726747    5395 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:25:57.729445    5395 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:25:57.732503    5395 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:25:57.732549    5395 cni.go:84] Creating CNI manager for ""
	I0729 04:25:57.732556    5395 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:25:57.732560    5395 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:25:57.732589    5395 start.go:340] cluster config:
	{Name:no-preload-561000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:25:57.736047    5395 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:25:57.741443    5395 out.go:177] * Starting "no-preload-561000" primary control-plane node in "no-preload-561000" cluster
	I0729 04:25:57.745454    5395 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 04:25:57.745525    5395 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/no-preload-561000/config.json ...
	I0729 04:25:57.745540    5395 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/no-preload-561000/config.json: {Name:mk9c667c08145f22782997fbe678a58ac205e55c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:25:57.745545    5395 cache.go:107] acquiring lock: {Name:mk2df94b52ac637de48a5553a8a3fa7c9ef4ed93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:25:57.745581    5395 cache.go:107] acquiring lock: {Name:mk85d364ec138399ee852f0daeddd76bffaa9f52 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:25:57.745610    5395 cache.go:107] acquiring lock: {Name:mk7ca72ef82a7c754b3e4594d846df401b895d89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:25:57.745620    5395 cache.go:115] /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 04:25:57.745604    5395 cache.go:107] acquiring lock: {Name:mk1352922ce3c1a11223cdca06038c7b39f9dc73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:25:57.745629    5395 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 84.916µs
	I0729 04:25:57.745643    5395 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 04:25:57.745625    5395 cache.go:107] acquiring lock: {Name:mk676caad28e53b8ca3541fa7e0aeff092798ca0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:25:57.745654    5395 cache.go:107] acquiring lock: {Name:mkb3bbf2caf22f25a4f46b98e680235931fbcae8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:25:57.745665    5395 cache.go:107] acquiring lock: {Name:mk5e886b508044bf1358ac320f7ba255eb42c75c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:25:57.745674    5395 cache.go:107] acquiring lock: {Name:mke1aeda748b2a151d6dfe74ed578c73b39b2820 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:25:57.745715    5395 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 04:25:57.745849    5395 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 04:25:57.745888    5395 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 04:25:57.745934    5395 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 04:25:57.746050    5395 start.go:360] acquireMachinesLock for no-preload-561000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:25:57.746081    5395 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 04:25:57.746098    5395 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 04:25:57.746113    5395 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 04:25:57.746161    5395 start.go:364] duration metric: took 92.792µs to acquireMachinesLock for "no-preload-561000"
	I0729 04:25:57.746172    5395 start.go:93] Provisioning new machine with config: &{Name:no-preload-561000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:25:57.746197    5395 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:25:57.753473    5395 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:25:57.756633    5395 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 04:25:57.756646    5395 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 04:25:57.756682    5395 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 04:25:57.756708    5395 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 04:25:57.756819    5395 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 04:25:57.758610    5395 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 04:25:57.758684    5395 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 04:25:57.769193    5395 start.go:159] libmachine.API.Create for "no-preload-561000" (driver="qemu2")
	I0729 04:25:57.769217    5395 client.go:168] LocalClient.Create starting
	I0729 04:25:57.769289    5395 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:25:57.769317    5395 main.go:141] libmachine: Decoding PEM data...
	I0729 04:25:57.769342    5395 main.go:141] libmachine: Parsing certificate...
	I0729 04:25:57.769392    5395 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:25:57.769415    5395 main.go:141] libmachine: Decoding PEM data...
	I0729 04:25:57.769424    5395 main.go:141] libmachine: Parsing certificate...
	I0729 04:25:57.769722    5395 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:25:57.929001    5395 main.go:141] libmachine: Creating SSH key...
	I0729 04:25:58.167149    5395 cache.go:162] opening:  /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 04:25:58.169656    5395 cache.go:162] opening:  /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 04:25:58.175574    5395 main.go:141] libmachine: Creating Disk image...
	I0729 04:25:58.175583    5395 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:25:58.175777    5395 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/no-preload-561000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/no-preload-561000/disk.qcow2
	I0729 04:25:58.186093    5395 main.go:141] libmachine: STDOUT: 
	I0729 04:25:58.186106    5395 main.go:141] libmachine: STDERR: 
	I0729 04:25:58.186159    5395 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/no-preload-561000/disk.qcow2 +20000M
	I0729 04:25:58.189680    5395 cache.go:162] opening:  /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 04:25:58.195498    5395 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:25:58.195509    5395 main.go:141] libmachine: STDERR: 
	I0729 04:25:58.195520    5395 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/no-preload-561000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/no-preload-561000/disk.qcow2
	I0729 04:25:58.195523    5395 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:25:58.195534    5395 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:25:58.195559    5395 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/no-preload-561000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/no-preload-561000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/no-preload-561000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:16:30:98:07:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/no-preload-561000/disk.qcow2
	I0729 04:25:58.197707    5395 main.go:141] libmachine: STDOUT: 
	I0729 04:25:58.197721    5395 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:25:58.197738    5395 client.go:171] duration metric: took 428.532292ms to LocalClient.Create
	I0729 04:25:58.205349    5395 cache.go:162] opening:  /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0729 04:25:58.232794    5395 cache.go:162] opening:  /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0
	I0729 04:25:58.248543    5395 cache.go:162] opening:  /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 04:25:58.269918    5395 cache.go:162] opening:  /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 04:25:58.353437    5395 cache.go:157] /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0729 04:25:58.353447    5395 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 607.835916ms
	I0729 04:25:58.353455    5395 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0729 04:26:00.197894    5395 start.go:128] duration metric: took 2.451713375s to createHost
	I0729 04:26:00.197946    5395 start.go:83] releasing machines lock for "no-preload-561000", held for 2.45185725s
	W0729 04:26:00.198006    5395 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:26:00.207831    5395 out.go:177] * Deleting "no-preload-561000" in qemu2 ...
	W0729 04:26:00.232948    5395 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:26:00.232981    5395 start.go:729] Will try again in 5 seconds ...
	I0729 04:26:00.689915    5395 cache.go:157] /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0729 04:26:00.689956    5395 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 2.9444385s
	I0729 04:26:00.689973    5395 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0729 04:26:01.378428    5395 cache.go:157] /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0729 04:26:01.378455    5395 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 3.632904417s
	I0729 04:26:01.378484    5395 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0729 04:26:01.532498    5395 cache.go:157] /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0729 04:26:01.532542    5395 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 3.787105875s
	I0729 04:26:01.532566    5395 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0729 04:26:01.922668    5395 cache.go:157] /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0729 04:26:01.922692    5395 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 4.177171292s
	I0729 04:26:01.922707    5395 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0729 04:26:02.575511    5395 cache.go:157] /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0729 04:26:02.575556    5395 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 4.830126042s
	I0729 04:26:02.575576    5395 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0729 04:26:05.234266    5395 start.go:360] acquireMachinesLock for no-preload-561000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:26:05.234716    5395 start.go:364] duration metric: took 374.75µs to acquireMachinesLock for "no-preload-561000"
	I0729 04:26:05.234830    5395 start.go:93] Provisioning new machine with config: &{Name:no-preload-561000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:26:05.234990    5395 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:26:05.244312    5395 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:26:05.286166    5395 start.go:159] libmachine.API.Create for "no-preload-561000" (driver="qemu2")
	I0729 04:26:05.286206    5395 client.go:168] LocalClient.Create starting
	I0729 04:26:05.286326    5395 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:26:05.286383    5395 main.go:141] libmachine: Decoding PEM data...
	I0729 04:26:05.286396    5395 main.go:141] libmachine: Parsing certificate...
	I0729 04:26:05.286462    5395 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:26:05.286515    5395 main.go:141] libmachine: Decoding PEM data...
	I0729 04:26:05.286529    5395 main.go:141] libmachine: Parsing certificate...
	I0729 04:26:05.287067    5395 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:26:05.445882    5395 main.go:141] libmachine: Creating SSH key...
	I0729 04:26:05.494550    5395 main.go:141] libmachine: Creating Disk image...
	I0729 04:26:05.494561    5395 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:26:05.494748    5395 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/no-preload-561000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/no-preload-561000/disk.qcow2
	I0729 04:26:05.503875    5395 main.go:141] libmachine: STDOUT: 
	I0729 04:26:05.503893    5395 main.go:141] libmachine: STDERR: 
	I0729 04:26:05.503946    5395 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/no-preload-561000/disk.qcow2 +20000M
	I0729 04:26:05.512027    5395 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:26:05.512044    5395 main.go:141] libmachine: STDERR: 
	I0729 04:26:05.512053    5395 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/no-preload-561000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/no-preload-561000/disk.qcow2
	I0729 04:26:05.512062    5395 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:26:05.512069    5395 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:26:05.512107    5395 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/no-preload-561000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/no-preload-561000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/no-preload-561000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:81:f5:a7:3a:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/no-preload-561000/disk.qcow2
	I0729 04:26:05.513859    5395 main.go:141] libmachine: STDOUT: 
	I0729 04:26:05.513875    5395 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:26:05.513900    5395 client.go:171] duration metric: took 227.687ms to LocalClient.Create
	I0729 04:26:05.813406    5395 cache.go:157] /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0729 04:26:05.813438    5395 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 8.068093875s
	I0729 04:26:05.813452    5395 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0729 04:26:05.813473    5395 cache.go:87] Successfully saved all images to host disk.
	I0729 04:26:07.516053    5395 start.go:128] duration metric: took 2.281109458s to createHost
	I0729 04:26:07.516129    5395 start.go:83] releasing machines lock for "no-preload-561000", held for 2.281459s
	W0729 04:26:07.516508    5395 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-561000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-561000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:26:07.526124    5395 out.go:177] 
	W0729 04:26:07.533223    5395 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:26:07.533302    5395 out.go:239] * 
	* 
	W0729 04:26:07.535835    5395 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:26:07.543963    5395 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-561000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-561000 -n no-preload-561000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-561000 -n no-preload-561000: exit status 7 (65.341166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-561000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.99s)

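Note on this failure: the start never gets past host creation. The qemu2 driver launches the VM through socket_vmnet_client, and the connect to the unix socket at /var/run/socket_vmnet (the SocketVMnetPath in the config dump above) is refused, meaning no socket_vmnet daemon is listening on the build host. The sketch below is a minimal standalone probe of just that step; it is not part of the test suite, and the file layout and messages are illustrative only.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// SocketVMnetPath from the failing cluster config; adjust if the daemon
	// is configured to listen elsewhere.
	const sock = "/var/run/socket_vmnet"
	if _, err := os.Stat(sock); err != nil {
		fmt.Fprintln(os.Stderr, "socket file missing:", err) // daemon was never started
		os.Exit(1)
	}
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A refused dial here matches the log's:
		//   Failed to connect to "/var/run/socket_vmnet": Connection refused
		fmt.Fprintln(os.Stderr, "dial failed:", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial is refused the same way, restarting the socket_vmnet daemon on the host should clear this whole group of failures, since minikube never gets as far as booting QEMU.
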
TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-561000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-561000 create -f testdata/busybox.yaml: exit status 1 (30.490792ms)

** stderr ** 
	error: context "no-preload-561000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-561000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-561000 -n no-preload-561000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-561000 -n no-preload-561000: exit status 7 (29.669542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-561000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-561000 -n no-preload-561000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-561000 -n no-preload-561000: exit status 7 (28.4215ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-561000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-561000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-561000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-561000 describe deploy/metrics-server -n kube-system: exit status 1 (27.03325ms)

** stderr ** 
	error: context "no-preload-561000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-561000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-561000 -n no-preload-561000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-561000 -n no-preload-561000: exit status 7 (29.294083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-561000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-561000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-561000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.189111583s)

-- stdout --
	* [no-preload-561000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-561000" primary control-plane node in "no-preload-561000" cluster
	* Restarting existing qemu2 VM for "no-preload-561000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-561000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	

-- /stdout --
** stderr ** 
	I0729 04:26:09.881639    5463 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:26:09.881755    5463 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:09.881757    5463 out.go:304] Setting ErrFile to fd 2...
	I0729 04:26:09.881760    5463 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:09.881899    5463 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:26:09.882960    5463 out.go:298] Setting JSON to false
	I0729 04:26:09.899336    5463 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3332,"bootTime":1722249037,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:26:09.899413    5463 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:26:09.904783    5463 out.go:177] * [no-preload-561000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:26:09.911672    5463 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:26:09.911705    5463 notify.go:220] Checking for updates...
	I0729 04:26:09.917826    5463 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:26:09.919228    5463 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:26:09.922795    5463 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:26:09.925868    5463 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:26:09.928810    5463 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:26:09.932172    5463 config.go:182] Loaded profile config "no-preload-561000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 04:26:09.932420    5463 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:26:09.936755    5463 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:26:09.944033    5463 start.go:297] selected driver: qemu2
	I0729 04:26:09.944040    5463 start.go:901] validating driver "qemu2" against &{Name:no-preload-561000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:26:09.944097    5463 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:26:09.946417    5463 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:26:09.946454    5463 cni.go:84] Creating CNI manager for ""
	I0729 04:26:09.946463    5463 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:26:09.946495    5463 start.go:340] cluster config:
	{Name:no-preload-561000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-561000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:26:09.949939    5463 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:26:09.956731    5463 out.go:177] * Starting "no-preload-561000" primary control-plane node in "no-preload-561000" cluster
	I0729 04:26:09.960740    5463 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 04:26:09.960795    5463 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/no-preload-561000/config.json ...
	I0729 04:26:09.960814    5463 cache.go:107] acquiring lock: {Name:mk2df94b52ac637de48a5553a8a3fa7c9ef4ed93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:26:09.960865    5463 cache.go:115] /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 04:26:09.960873    5463 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 61.208µs
	I0729 04:26:09.960926    5463 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 04:26:09.960907    5463 cache.go:107] acquiring lock: {Name:mk1352922ce3c1a11223cdca06038c7b39f9dc73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:26:09.960935    5463 cache.go:107] acquiring lock: {Name:mk7ca72ef82a7c754b3e4594d846df401b895d89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:26:09.960959    5463 cache.go:107] acquiring lock: {Name:mk676caad28e53b8ca3541fa7e0aeff092798ca0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:26:09.960933    5463 cache.go:107] acquiring lock: {Name:mkb3bbf2caf22f25a4f46b98e680235931fbcae8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:26:09.960956    5463 cache.go:107] acquiring lock: {Name:mk85d364ec138399ee852f0daeddd76bffaa9f52 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:26:09.960964    5463 cache.go:107] acquiring lock: {Name:mk5e886b508044bf1358ac320f7ba255eb42c75c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:26:09.961007    5463 cache.go:107] acquiring lock: {Name:mke1aeda748b2a151d6dfe74ed578c73b39b2820 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:26:09.961080    5463 cache.go:115] /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0729 04:26:09.961095    5463 cache.go:115] /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0729 04:26:09.961102    5463 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 185.167µs
	I0729 04:26:09.961105    5463 cache.go:115] /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0729 04:26:09.961110    5463 cache.go:115] /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0729 04:26:09.961111    5463 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 181.542µs
	I0729 04:26:09.961117    5463 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0729 04:26:09.961114    5463 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 132µs
	I0729 04:26:09.961102    5463 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 139.125µs
	I0729 04:26:09.961125    5463 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0729 04:26:09.961114    5463 cache.go:115] /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0729 04:26:09.961106    5463 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0729 04:26:09.961121    5463 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0729 04:26:09.961130    5463 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 194.708µs
	I0729 04:26:09.961133    5463 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0729 04:26:09.961156    5463 cache.go:115] /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0729 04:26:09.961161    5463 cache.go:115] /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0729 04:26:09.961186    5463 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 314.417µs
	I0729 04:26:09.961191    5463 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0729 04:26:09.961163    5463 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 228.833µs
	I0729 04:26:09.961196    5463 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0729 04:26:09.961198    5463 cache.go:87] Successfully saved all images to host disk.
	I0729 04:26:09.961237    5463 start.go:360] acquireMachinesLock for no-preload-561000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:26:09.961267    5463 start.go:364] duration metric: took 24.875µs to acquireMachinesLock for "no-preload-561000"
	I0729 04:26:09.961277    5463 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:26:09.961283    5463 fix.go:54] fixHost starting: 
	I0729 04:26:09.961390    5463 fix.go:112] recreateIfNeeded on no-preload-561000: state=Stopped err=<nil>
	W0729 04:26:09.961401    5463 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:26:09.969752    5463 out.go:177] * Restarting existing qemu2 VM for "no-preload-561000" ...
	I0729 04:26:09.973796    5463 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:26:09.973835    5463 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/no-preload-561000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/no-preload-561000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/no-preload-561000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:81:f5:a7:3a:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/no-preload-561000/disk.qcow2
	I0729 04:26:09.975817    5463 main.go:141] libmachine: STDOUT: 
	I0729 04:26:09.975838    5463 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:26:09.975863    5463 fix.go:56] duration metric: took 14.581041ms for fixHost
	I0729 04:26:09.975866    5463 start.go:83] releasing machines lock for "no-preload-561000", held for 14.595875ms
	W0729 04:26:09.975870    5463 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:26:09.975915    5463 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:26:09.975919    5463 start.go:729] Will try again in 5 seconds ...
	I0729 04:26:14.977917    5463 start.go:360] acquireMachinesLock for no-preload-561000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:26:14.978309    5463 start.go:364] duration metric: took 316.292µs to acquireMachinesLock for "no-preload-561000"
	I0729 04:26:14.978440    5463 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:26:14.978459    5463 fix.go:54] fixHost starting: 
	I0729 04:26:14.979202    5463 fix.go:112] recreateIfNeeded on no-preload-561000: state=Stopped err=<nil>
	W0729 04:26:14.979231    5463 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:26:14.994704    5463 out.go:177] * Restarting existing qemu2 VM for "no-preload-561000" ...
	I0729 04:26:14.997647    5463 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:26:14.997829    5463 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/no-preload-561000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/no-preload-561000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/no-preload-561000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:81:f5:a7:3a:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/no-preload-561000/disk.qcow2
	I0729 04:26:15.006927    5463 main.go:141] libmachine: STDOUT: 
	I0729 04:26:15.006991    5463 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:26:15.007118    5463 fix.go:56] duration metric: took 28.621ms for fixHost
	I0729 04:26:15.007139    5463 start.go:83] releasing machines lock for "no-preload-561000", held for 28.810209ms
	W0729 04:26:15.007319    5463 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-561000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-561000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:26:15.015511    5463 out.go:177] 
	W0729 04:26:15.018585    5463 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:26:15.018608    5463 out.go:239] * 
	* 
	W0729 04:26:15.021089    5463 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:26:15.029421    5463 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-561000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-561000 -n no-preload-561000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-561000 -n no-preload-561000: exit status 7 (65.701708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-561000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)

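SecondStart fails on the restart path (fixHost, "Restarting existing qemu2 VM") at the same point where FirstStart failed on the create path: both run QEMU through socket_vmnet_client, which must connect to the daemon's socket before it will exec QEMU with the connected fd behind the -netdev socket,id=net0,fd=3 device. A rough sketch of that wrapper invocation, simplified from the "executing:" lines above (launchVM is a hypothetical helper, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
)

// launchVM mirrors the "executing:" lines in the log: QEMU is never run
// directly; socket_vmnet_client first connects to the daemon's unix socket,
// then hands the connected descriptor to the qemu-system-aarch64 child.
func launchVM(qemuArgs ...string) error {
	args := append([]string{"/var/run/socket_vmnet", "qemu-system-aarch64"}, qemuArgs...)
	out, err := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client", args...).CombinedOutput()
	if err != nil {
		// With the daemon down this fails before QEMU starts, producing the
		// 'Failed to connect ... Connection refused' seen throughout this report.
		return fmt.Errorf("socket_vmnet_client: %w: %s", err, out)
	}
	return nil
}

func main() {
	if err := launchVM("-M", "virt,highmem=off", "-display", "none"); err != nil {
		fmt.Println(err)
	}
}

Because the wrapper exits before QEMU launches, no VM state is ever created or damaged; the profile simply stays "Stopped", which is why every post-mortem status check in this group exits 7 rather than finding a half-started machine.
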
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-789000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-789000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.830081958s)

-- stdout --
	* [default-k8s-diff-port-789000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-789000" primary control-plane node in "default-k8s-diff-port-789000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-789000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	

-- /stdout --
** stderr ** 
	I0729 04:26:12.266881    5473 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:26:12.267000    5473 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:12.267003    5473 out.go:304] Setting ErrFile to fd 2...
	I0729 04:26:12.267005    5473 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:12.267147    5473 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:26:12.268228    5473 out.go:298] Setting JSON to false
	I0729 04:26:12.284223    5473 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3335,"bootTime":1722249037,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:26:12.284295    5473 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:26:12.288753    5473 out.go:177] * [default-k8s-diff-port-789000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:26:12.295696    5473 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:26:12.295723    5473 notify.go:220] Checking for updates...
	I0729 04:26:12.302725    5473 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:26:12.305718    5473 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:26:12.308678    5473 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:26:12.311715    5473 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:26:12.314706    5473 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:26:12.317966    5473 config.go:182] Loaded profile config "multinode-369000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:26:12.318043    5473 config.go:182] Loaded profile config "no-preload-561000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 04:26:12.318088    5473 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:26:12.321697    5473 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:26:12.328645    5473 start.go:297] selected driver: qemu2
	I0729 04:26:12.328651    5473 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:26:12.328658    5473 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:26:12.330864    5473 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:26:12.333667    5473 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:26:12.336705    5473 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:26:12.336720    5473 cni.go:84] Creating CNI manager for ""
	I0729 04:26:12.336730    5473 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:26:12.336734    5473 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:26:12.336758    5473 start.go:340] cluster config:
	{Name:default-k8s-diff-port-789000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:26:12.340373    5473 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:26:12.347717    5473 out.go:177] * Starting "default-k8s-diff-port-789000" primary control-plane node in "default-k8s-diff-port-789000" cluster
	I0729 04:26:12.351726    5473 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:26:12.351738    5473 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:26:12.351746    5473 cache.go:56] Caching tarball of preloaded images
	I0729 04:26:12.351796    5473 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:26:12.351802    5473 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:26:12.351865    5473 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/default-k8s-diff-port-789000/config.json ...
	I0729 04:26:12.351877    5473 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/default-k8s-diff-port-789000/config.json: {Name:mk0676dfbe092759cbe047a9953aff3164845602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:26:12.352084    5473 start.go:360] acquireMachinesLock for default-k8s-diff-port-789000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:26:12.352120    5473 start.go:364] duration metric: took 27.292µs to acquireMachinesLock for "default-k8s-diff-port-789000"
	I0729 04:26:12.352133    5473 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-789000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:26:12.352161    5473 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:26:12.360694    5473 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:26:12.378008    5473 start.go:159] libmachine.API.Create for "default-k8s-diff-port-789000" (driver="qemu2")
	I0729 04:26:12.378032    5473 client.go:168] LocalClient.Create starting
	I0729 04:26:12.378108    5473 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:26:12.378140    5473 main.go:141] libmachine: Decoding PEM data...
	I0729 04:26:12.378150    5473 main.go:141] libmachine: Parsing certificate...
	I0729 04:26:12.378191    5473 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:26:12.378220    5473 main.go:141] libmachine: Decoding PEM data...
	I0729 04:26:12.378227    5473 main.go:141] libmachine: Parsing certificate...
	I0729 04:26:12.378633    5473 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:26:12.538856    5473 main.go:141] libmachine: Creating SSH key...
	I0729 04:26:12.642565    5473 main.go:141] libmachine: Creating Disk image...
	I0729 04:26:12.642570    5473 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:26:12.642749    5473 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/default-k8s-diff-port-789000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/default-k8s-diff-port-789000/disk.qcow2
	I0729 04:26:12.652175    5473 main.go:141] libmachine: STDOUT: 
	I0729 04:26:12.652195    5473 main.go:141] libmachine: STDERR: 
	I0729 04:26:12.652253    5473 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/default-k8s-diff-port-789000/disk.qcow2 +20000M
	I0729 04:26:12.659996    5473 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:26:12.660010    5473 main.go:141] libmachine: STDERR: 
	I0729 04:26:12.660023    5473 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/default-k8s-diff-port-789000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/default-k8s-diff-port-789000/disk.qcow2
	I0729 04:26:12.660034    5473 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:26:12.660043    5473 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:26:12.660071    5473 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/default-k8s-diff-port-789000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/default-k8s-diff-port-789000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/default-k8s-diff-port-789000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:14:17:63:8c:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/default-k8s-diff-port-789000/disk.qcow2
	I0729 04:26:12.661607    5473 main.go:141] libmachine: STDOUT: 
	I0729 04:26:12.661621    5473 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:26:12.661642    5473 client.go:171] duration metric: took 283.615167ms to LocalClient.Create
	I0729 04:26:14.663747    5473 start.go:128] duration metric: took 2.311635167s to createHost
	I0729 04:26:14.663846    5473 start.go:83] releasing machines lock for "default-k8s-diff-port-789000", held for 2.311789792s
	W0729 04:26:14.663923    5473 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:26:14.674858    5473 out.go:177] * Deleting "default-k8s-diff-port-789000" in qemu2 ...
	W0729 04:26:14.707776    5473 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:26:14.707845    5473 start.go:729] Will try again in 5 seconds ...
	I0729 04:26:19.709926    5473 start.go:360] acquireMachinesLock for default-k8s-diff-port-789000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:26:19.710372    5473 start.go:364] duration metric: took 333.333µs to acquireMachinesLock for "default-k8s-diff-port-789000"
	I0729 04:26:19.710512    5473 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-789000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:26:19.710805    5473 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:26:19.716481    5473 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:26:19.769452    5473 start.go:159] libmachine.API.Create for "default-k8s-diff-port-789000" (driver="qemu2")
	I0729 04:26:19.769502    5473 client.go:168] LocalClient.Create starting
	I0729 04:26:19.769622    5473 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:26:19.769687    5473 main.go:141] libmachine: Decoding PEM data...
	I0729 04:26:19.769720    5473 main.go:141] libmachine: Parsing certificate...
	I0729 04:26:19.769781    5473 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:26:19.769840    5473 main.go:141] libmachine: Decoding PEM data...
	I0729 04:26:19.769853    5473 main.go:141] libmachine: Parsing certificate...
	I0729 04:26:19.770399    5473 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:26:19.933435    5473 main.go:141] libmachine: Creating SSH key...
	I0729 04:26:20.003524    5473 main.go:141] libmachine: Creating Disk image...
	I0729 04:26:20.003530    5473 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:26:20.003714    5473 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/default-k8s-diff-port-789000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/default-k8s-diff-port-789000/disk.qcow2
	I0729 04:26:20.013033    5473 main.go:141] libmachine: STDOUT: 
	I0729 04:26:20.013049    5473 main.go:141] libmachine: STDERR: 
	I0729 04:26:20.013115    5473 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/default-k8s-diff-port-789000/disk.qcow2 +20000M
	I0729 04:26:20.020956    5473 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:26:20.020978    5473 main.go:141] libmachine: STDERR: 
	I0729 04:26:20.020990    5473 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/default-k8s-diff-port-789000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/default-k8s-diff-port-789000/disk.qcow2
	I0729 04:26:20.020994    5473 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:26:20.021002    5473 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:26:20.021029    5473 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/default-k8s-diff-port-789000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/default-k8s-diff-port-789000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/default-k8s-diff-port-789000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:08:b8:66:a0:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/default-k8s-diff-port-789000/disk.qcow2
	I0729 04:26:20.022658    5473 main.go:141] libmachine: STDOUT: 
	I0729 04:26:20.022674    5473 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:26:20.022686    5473 client.go:171] duration metric: took 253.184417ms to LocalClient.Create
	I0729 04:26:22.024829    5473 start.go:128] duration metric: took 2.314061083s to createHost
	I0729 04:26:22.024915    5473 start.go:83] releasing machines lock for "default-k8s-diff-port-789000", held for 2.31459375s
	W0729 04:26:22.025198    5473 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-789000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-789000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:26:22.032255    5473 out.go:177] 
	W0729 04:26:22.042330    5473 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:26:22.042373    5473 out.go:239] * 
	* 
	W0729 04:26:22.044834    5473 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:26:22.055132    5473 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-789000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-789000 -n default-k8s-diff-port-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-789000 -n default-k8s-diff-port-789000: exit status 7 (68.429166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.90s)
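
Note on the failure mode: every start attempt above dies at the same point, with socket_vmnet_client reporting Failed to connect to "/var/run/socket_vmnet": Connection refused, which means the socket_vmnet daemon is not serving its unix socket on the build host. A minimal reachability probe, assuming only the socket path shown in the log (a sketch, not code from the test suite):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Dial the same unix socket socket_vmnet_client uses; a stopped daemon
	// yields the "Connection refused" seen throughout this report.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}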

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-561000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-561000 -n no-preload-561000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-561000 -n no-preload-561000: exit status 7 (31.433709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-561000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
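
The context "no-preload-561000" does not exist errors here and in the subtests below are a downstream symptom: because the VM never started, minikube never wrote the profile's context into the kubeconfig, so every kubectl --context call fails immediately. A sketch of the lookup kubectl performs, assuming the k8s.io/client-go loading rules (illustrative, not the harness's code):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the way kubectl does, then look up the context by
	// name; a missing entry is what kubectl reports as exit status 1 above.
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		panic(err)
	}
	if _, ok := cfg.Contexts["no-preload-561000"]; !ok {
		fmt.Println(`context "no-preload-561000" does not exist`)
	}
}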

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-561000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-561000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-561000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.649ms)

** stderr ** 
	error: context "no-preload-561000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-561000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-561000 -n no-preload-561000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-561000 -n no-preload-561000: exit status 7 (28.560083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-561000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-561000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-561000 -n no-preload-561000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-561000 -n no-preload-561000: exit status 7 (28.662833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-561000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
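
The (-want +got) block above is go-cmp's diff format: every expected image sits on a "-" line and nothing appears on "+" lines, because "image list" ran against a host that never came up. A minimal sketch of how such a check is written, assuming github.com/google/go-cmp (names here are illustrative):

package images_test

import (
	"testing"

	"github.com/google/go-cmp/cmp"
)

func TestKubernetesImages(t *testing.T) {
	want := []string{"registry.k8s.io/pause:3.10"} // abbreviated expected list
	var got []string                               // empty: no running host to query
	if diff := cmp.Diff(want, got); diff != "" {
		t.Errorf("v1.31.0-beta.0 images missing (-want +got):\n%s", diff)
	}
}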

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-561000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-561000 --alsologtostderr -v=1: exit status 83 (38.938ms)

-- stdout --
	* The control-plane node no-preload-561000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-561000"

-- /stdout --
** stderr ** 
	I0729 04:26:15.292880    5495 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:26:15.293016    5495 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:15.293020    5495 out.go:304] Setting ErrFile to fd 2...
	I0729 04:26:15.293022    5495 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:15.293144    5495 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:26:15.293353    5495 out.go:298] Setting JSON to false
	I0729 04:26:15.293359    5495 mustload.go:65] Loading cluster: no-preload-561000
	I0729 04:26:15.293555    5495 config.go:182] Loaded profile config "no-preload-561000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 04:26:15.297401    5495 out.go:177] * The control-plane node no-preload-561000 host is not running: state=Stopped
	I0729 04:26:15.300548    5495 out.go:177]   To start a cluster, run: "minikube start -p no-preload-561000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-561000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-561000 -n no-preload-561000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-561000 -n no-preload-561000: exit status 7 (28.693209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-561000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-561000 -n no-preload-561000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-561000 -n no-preload-561000: exit status 7 (28.895125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-561000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
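
Each post-mortem in this report drives minikube status --format={{.Host}}, which prints the host state on stdout and also encodes it in the exit code (7 for the stopped or nonexistent hosts here, hence the harness's "may be ok" note). A sketch of reading both, reusing the binary path and profile name from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "no-preload-561000", "-n", "no-preload-561000")
	out, err := cmd.Output() // stdout carries the state, e.g. "Stopped"
	fmt.Printf("host state: %s", out)
	if ee, ok := err.(*exec.ExitError); ok {
		fmt.Println("exit code:", ee.ExitCode()) // 7 for a stopped host, per the log
	}
}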

TestStartStop/group/newest-cni/serial/FirstStart (9.97s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-469000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-469000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.901776125s)

-- stdout --
	* [newest-cni-469000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-469000" primary control-plane node in "newest-cni-469000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-469000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:26:15.607572    5512 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:26:15.607715    5512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:15.607718    5512 out.go:304] Setting ErrFile to fd 2...
	I0729 04:26:15.607720    5512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:15.607867    5512 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:26:15.609023    5512 out.go:298] Setting JSON to false
	I0729 04:26:15.625042    5512 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3338,"bootTime":1722249037,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:26:15.625103    5512 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:26:15.628574    5512 out.go:177] * [newest-cni-469000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:26:15.635563    5512 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:26:15.635675    5512 notify.go:220] Checking for updates...
	I0729 04:26:15.642495    5512 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:26:15.645493    5512 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:26:15.648510    5512 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:26:15.651497    5512 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:26:15.654504    5512 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:26:15.657901    5512 config.go:182] Loaded profile config "default-k8s-diff-port-789000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:26:15.657965    5512 config.go:182] Loaded profile config "multinode-369000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:26:15.658012    5512 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:26:15.662467    5512 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:26:15.669453    5512 start.go:297] selected driver: qemu2
	I0729 04:26:15.669460    5512 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:26:15.669465    5512 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:26:15.671773    5512 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0729 04:26:15.671797    5512 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0729 04:26:15.679505    5512 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:26:15.682588    5512 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 04:26:15.682635    5512 cni.go:84] Creating CNI manager for ""
	I0729 04:26:15.682643    5512 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:26:15.682647    5512 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:26:15.682671    5512 start.go:340] cluster config:
	{Name:newest-cni-469000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-469000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:26:15.686382    5512 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:26:15.693508    5512 out.go:177] * Starting "newest-cni-469000" primary control-plane node in "newest-cni-469000" cluster
	I0729 04:26:15.697484    5512 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 04:26:15.697502    5512 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 04:26:15.697511    5512 cache.go:56] Caching tarball of preloaded images
	I0729 04:26:15.697574    5512 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:26:15.697582    5512 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 04:26:15.697652    5512 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/newest-cni-469000/config.json ...
	I0729 04:26:15.697664    5512 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/newest-cni-469000/config.json: {Name:mkb9c8132a00a91039492d05e1632fb1298e6072 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:26:15.697886    5512 start.go:360] acquireMachinesLock for newest-cni-469000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:26:15.697919    5512 start.go:364] duration metric: took 27.292µs to acquireMachinesLock for "newest-cni-469000"
	I0729 04:26:15.697931    5512 start.go:93] Provisioning new machine with config: &{Name:newest-cni-469000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-469000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:26:15.697965    5512 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:26:15.702364    5512 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:26:15.720082    5512 start.go:159] libmachine.API.Create for "newest-cni-469000" (driver="qemu2")
	I0729 04:26:15.720105    5512 client.go:168] LocalClient.Create starting
	I0729 04:26:15.720165    5512 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:26:15.720196    5512 main.go:141] libmachine: Decoding PEM data...
	I0729 04:26:15.720208    5512 main.go:141] libmachine: Parsing certificate...
	I0729 04:26:15.720240    5512 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:26:15.720262    5512 main.go:141] libmachine: Decoding PEM data...
	I0729 04:26:15.720268    5512 main.go:141] libmachine: Parsing certificate...
	I0729 04:26:15.720675    5512 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:26:15.874113    5512 main.go:141] libmachine: Creating SSH key...
	I0729 04:26:16.106153    5512 main.go:141] libmachine: Creating Disk image...
	I0729 04:26:16.106164    5512 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:26:16.106406    5512 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/newest-cni-469000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/newest-cni-469000/disk.qcow2
	I0729 04:26:16.116341    5512 main.go:141] libmachine: STDOUT: 
	I0729 04:26:16.116356    5512 main.go:141] libmachine: STDERR: 
	I0729 04:26:16.116416    5512 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/newest-cni-469000/disk.qcow2 +20000M
	I0729 04:26:16.124291    5512 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:26:16.124304    5512 main.go:141] libmachine: STDERR: 
	I0729 04:26:16.124322    5512 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/newest-cni-469000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/newest-cni-469000/disk.qcow2
	I0729 04:26:16.124327    5512 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:26:16.124337    5512 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:26:16.124363    5512 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/newest-cni-469000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/newest-cni-469000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/newest-cni-469000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:9b:14:04:a4:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/newest-cni-469000/disk.qcow2
	I0729 04:26:16.125946    5512 main.go:141] libmachine: STDOUT: 
	I0729 04:26:16.125961    5512 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:26:16.125977    5512 client.go:171] duration metric: took 405.879167ms to LocalClient.Create
	I0729 04:26:18.128078    5512 start.go:128] duration metric: took 2.430170625s to createHost
	I0729 04:26:18.128122    5512 start.go:83] releasing machines lock for "newest-cni-469000", held for 2.430272667s
	W0729 04:26:18.128193    5512 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:26:18.139238    5512 out.go:177] * Deleting "newest-cni-469000" in qemu2 ...
	W0729 04:26:18.170624    5512 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:26:18.170648    5512 start.go:729] Will try again in 5 seconds ...
	I0729 04:26:23.172661    5512 start.go:360] acquireMachinesLock for newest-cni-469000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:26:23.173137    5512 start.go:364] duration metric: took 370.5µs to acquireMachinesLock for "newest-cni-469000"
	I0729 04:26:23.173322    5512 start.go:93] Provisioning new machine with config: &{Name:newest-cni-469000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-469000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:26:23.173583    5512 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:26:23.179317    5512 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:26:23.228291    5512 start.go:159] libmachine.API.Create for "newest-cni-469000" (driver="qemu2")
	I0729 04:26:23.228342    5512 client.go:168] LocalClient.Create starting
	I0729 04:26:23.228444    5512 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:26:23.228489    5512 main.go:141] libmachine: Decoding PEM data...
	I0729 04:26:23.228506    5512 main.go:141] libmachine: Parsing certificate...
	I0729 04:26:23.228567    5512 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:26:23.228611    5512 main.go:141] libmachine: Decoding PEM data...
	I0729 04:26:23.228625    5512 main.go:141] libmachine: Parsing certificate...
	I0729 04:26:23.229235    5512 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:26:23.391732    5512 main.go:141] libmachine: Creating SSH key...
	I0729 04:26:23.422634    5512 main.go:141] libmachine: Creating Disk image...
	I0729 04:26:23.422640    5512 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:26:23.422818    5512 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/newest-cni-469000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/newest-cni-469000/disk.qcow2
	I0729 04:26:23.431915    5512 main.go:141] libmachine: STDOUT: 
	I0729 04:26:23.431937    5512 main.go:141] libmachine: STDERR: 
	I0729 04:26:23.431986    5512 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/newest-cni-469000/disk.qcow2 +20000M
	I0729 04:26:23.439726    5512 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:26:23.439747    5512 main.go:141] libmachine: STDERR: 
	I0729 04:26:23.439757    5512 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/newest-cni-469000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/newest-cni-469000/disk.qcow2
	I0729 04:26:23.439762    5512 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:26:23.439781    5512 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:26:23.439808    5512 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/newest-cni-469000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/newest-cni-469000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/newest-cni-469000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:3e:52:bc:64:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/newest-cni-469000/disk.qcow2
	I0729 04:26:23.441398    5512 main.go:141] libmachine: STDOUT: 
	I0729 04:26:23.441419    5512 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:26:23.441435    5512 client.go:171] duration metric: took 213.093417ms to LocalClient.Create
	I0729 04:26:25.443572    5512 start.go:128] duration metric: took 2.270032291s to createHost
	I0729 04:26:25.443637    5512 start.go:83] releasing machines lock for "newest-cni-469000", held for 2.270533584s
	W0729 04:26:25.443916    5512 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-469000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-469000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:26:25.455396    5512 out.go:177] 
	W0729 04:26:25.459529    5512 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:26:25.459553    5512 out.go:239] * 
	* 
	W0729 04:26:25.462058    5512 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:26:25.469349    5512 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-469000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-469000 -n newest-cni-469000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-469000 -n newest-cni-469000: exit status 7 (61.608709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-469000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.97s)
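
Both first-start attempts above follow the retry shape visible in start.go's log lines: create the host, and on failure delete the half-built VM, wait five seconds, and try exactly once more before exiting with GUEST_PROVISION. A compressed sketch of that flow (helper names are illustrative, not minikube's API):

package main

import (
	"fmt"
	"log"
	"time"
)

// createHostWithRetry mirrors the log: one delayed second attempt, then give up.
func createHostWithRetry(create, cleanup func() error) error {
	if err := create(); err != nil {
		log.Printf("StartHost failed, but will try again: %v", err)
		cleanup()                   // "* Deleting ... in qemu2 ..."
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		return create()
	}
	return nil
}

func main() {
	connRefused := func() error {
		return fmt.Errorf(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}
	cleanup := func() error { return nil }
	if err := createHostWithRetry(connRefused, cleanup); err != nil {
		log.Fatalf("Exiting due to GUEST_PROVISION: %v", err)
	}
}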

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-789000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-789000 create -f testdata/busybox.yaml: exit status 1 (30.621542ms)

** stderr ** 
	error: context "default-k8s-diff-port-789000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-789000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-789000 -n default-k8s-diff-port-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-789000 -n default-k8s-diff-port-789000: exit status 7 (28.932042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-789000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-789000 -n default-k8s-diff-port-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-789000 -n default-k8s-diff-port-789000: exit status 7 (28.808458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-789000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-789000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-789000 describe deploy/metrics-server -n kube-system: exit status 1 (26.131833ms)

** stderr ** 
	error: context "default-k8s-diff-port-789000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-789000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-789000 -n default-k8s-diff-port-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-789000 -n default-k8s-diff-port-789000: exit status 7 (28.213292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-789000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-789000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (6.00936025s)

-- stdout --
	* [default-k8s-diff-port-789000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-789000" primary control-plane node in "default-k8s-diff-port-789000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-789000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-789000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:26:24.548404    5561 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:26:24.548525    5561 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:24.548528    5561 out.go:304] Setting ErrFile to fd 2...
	I0729 04:26:24.548537    5561 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:24.548673    5561 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:26:24.549735    5561 out.go:298] Setting JSON to false
	I0729 04:26:24.565604    5561 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3347,"bootTime":1722249037,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:26:24.565680    5561 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:26:24.570448    5561 out.go:177] * [default-k8s-diff-port-789000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:26:24.577374    5561 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:26:24.577408    5561 notify.go:220] Checking for updates...
	I0729 04:26:24.594651    5561 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:26:24.597382    5561 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:26:24.600336    5561 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:26:24.603326    5561 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:26:24.606326    5561 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:26:24.609638    5561 config.go:182] Loaded profile config "default-k8s-diff-port-789000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:26:24.609908    5561 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:26:24.613353    5561 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:26:24.620349    5561 start.go:297] selected driver: qemu2
	I0729 04:26:24.620355    5561 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-789000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:26:24.620432    5561 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:26:24.622824    5561 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:26:24.622931    5561 cni.go:84] Creating CNI manager for ""
	I0729 04:26:24.622940    5561 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:26:24.622976    5561 start.go:340] cluster config:
	{Name:default-k8s-diff-port-789000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:26:24.626512    5561 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:26:24.634329    5561 out.go:177] * Starting "default-k8s-diff-port-789000" primary control-plane node in "default-k8s-diff-port-789000" cluster
	I0729 04:26:24.638200    5561 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:26:24.638216    5561 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:26:24.638225    5561 cache.go:56] Caching tarball of preloaded images
	I0729 04:26:24.638281    5561 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:26:24.638287    5561 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:26:24.638352    5561 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/default-k8s-diff-port-789000/config.json ...
	I0729 04:26:24.638857    5561 start.go:360] acquireMachinesLock for default-k8s-diff-port-789000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:26:25.443799    5561 start.go:364] duration metric: took 804.904959ms to acquireMachinesLock for "default-k8s-diff-port-789000"
	I0729 04:26:25.443982    5561 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:26:25.444025    5561 fix.go:54] fixHost starting: 
	I0729 04:26:25.444760    5561 fix.go:112] recreateIfNeeded on default-k8s-diff-port-789000: state=Stopped err=<nil>
	W0729 04:26:25.444810    5561 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:26:25.455398    5561 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-789000" ...
	I0729 04:26:25.459493    5561 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:26:25.459699    5561 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/default-k8s-diff-port-789000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/default-k8s-diff-port-789000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/default-k8s-diff-port-789000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:08:b8:66:a0:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/default-k8s-diff-port-789000/disk.qcow2
	I0729 04:26:25.469604    5561 main.go:141] libmachine: STDOUT: 
	I0729 04:26:25.469673    5561 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:26:25.469876    5561 fix.go:56] duration metric: took 25.854541ms for fixHost
	I0729 04:26:25.469902    5561 start.go:83] releasing machines lock for "default-k8s-diff-port-789000", held for 26.066709ms
	W0729 04:26:25.469929    5561 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:26:25.470085    5561 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:26:25.470100    5561 start.go:729] Will try again in 5 seconds ...
	I0729 04:26:30.472214    5561 start.go:360] acquireMachinesLock for default-k8s-diff-port-789000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:26:30.472640    5561 start.go:364] duration metric: took 316.666µs to acquireMachinesLock for "default-k8s-diff-port-789000"
	I0729 04:26:30.472782    5561 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:26:30.472801    5561 fix.go:54] fixHost starting: 
	I0729 04:26:30.473572    5561 fix.go:112] recreateIfNeeded on default-k8s-diff-port-789000: state=Stopped err=<nil>
	W0729 04:26:30.473600    5561 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:26:30.482236    5561 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-789000" ...
	I0729 04:26:30.486287    5561 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:26:30.486514    5561 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/default-k8s-diff-port-789000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/default-k8s-diff-port-789000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/default-k8s-diff-port-789000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:08:b8:66:a0:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/default-k8s-diff-port-789000/disk.qcow2
	I0729 04:26:30.495768    5561 main.go:141] libmachine: STDOUT: 
	I0729 04:26:30.495832    5561 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:26:30.495903    5561 fix.go:56] duration metric: took 23.099958ms for fixHost
	I0729 04:26:30.495923    5561 start.go:83] releasing machines lock for "default-k8s-diff-port-789000", held for 23.263583ms
	W0729 04:26:30.496152    5561 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-789000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-789000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:26:30.503222    5561 out.go:177] 
	W0729 04:26:30.507395    5561 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:26:30.507424    5561 out.go:239] * 
	* 
	W0729 04:26:30.509992    5561 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:26:30.517206    5561 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-789000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-789000 -n default-k8s-diff-port-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-789000 -n default-k8s-diff-port-789000: exit status 7 (69.583083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.08s)
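
Note: every qemu2 (re)start in this run fails the same way: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the VM never gets its network device and minikube exits 80 with GUEST_PROVISION. A diagnostic sketch for the build host (the service name assumes a Homebrew-installed socket_vmnet; adjust for a manual install):

	# check that the daemon's unix socket exists and something is listening on it
	ls -l /var/run/socket_vmnet
	sudo lsof -U | grep socket_vmnet

	# restart the daemon if it is not running
	sudo brew services restart socket_vmnet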

TestStartStop/group/newest-cni/serial/SecondStart (6.11s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-469000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-469000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (6.052263417s)

-- stdout --
	* [newest-cni-469000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-469000" primary control-plane node in "newest-cni-469000" cluster
	* Restarting existing qemu2 VM for "newest-cni-469000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-469000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:26:27.582579    5586 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:26:27.582711    5586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:27.582714    5586 out.go:304] Setting ErrFile to fd 2...
	I0729 04:26:27.582717    5586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:27.582852    5586 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:26:27.583848    5586 out.go:298] Setting JSON to false
	I0729 04:26:27.599570    5586 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3350,"bootTime":1722249037,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:26:27.599647    5586 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:26:27.604224    5586 out.go:177] * [newest-cni-469000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:26:27.610231    5586 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:26:27.610276    5586 notify.go:220] Checking for updates...
	I0729 04:26:27.617202    5586 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:26:27.620237    5586 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:26:27.623260    5586 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:26:27.626202    5586 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:26:27.629223    5586 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:26:27.632510    5586 config.go:182] Loaded profile config "newest-cni-469000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 04:26:27.632766    5586 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:26:27.637191    5586 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:26:27.644165    5586 start.go:297] selected driver: qemu2
	I0729 04:26:27.644175    5586 start.go:901] validating driver "qemu2" against &{Name:newest-cni-469000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-469000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:26:27.644250    5586 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:26:27.646562    5586 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 04:26:27.646602    5586 cni.go:84] Creating CNI manager for ""
	I0729 04:26:27.646609    5586 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:26:27.646630    5586 start.go:340] cluster config:
	{Name:newest-cni-469000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-469000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:26:27.650193    5586 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:26:27.658067    5586 out.go:177] * Starting "newest-cni-469000" primary control-plane node in "newest-cni-469000" cluster
	I0729 04:26:27.662208    5586 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 04:26:27.662226    5586 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 04:26:27.662238    5586 cache.go:56] Caching tarball of preloaded images
	I0729 04:26:27.662317    5586 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:26:27.662323    5586 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 04:26:27.662395    5586 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/newest-cni-469000/config.json ...
	I0729 04:26:27.662864    5586 start.go:360] acquireMachinesLock for newest-cni-469000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:26:27.662892    5586 start.go:364] duration metric: took 21.375µs to acquireMachinesLock for "newest-cni-469000"
	I0729 04:26:27.662902    5586 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:26:27.662907    5586 fix.go:54] fixHost starting: 
	I0729 04:26:27.663017    5586 fix.go:112] recreateIfNeeded on newest-cni-469000: state=Stopped err=<nil>
	W0729 04:26:27.663025    5586 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:26:27.666169    5586 out.go:177] * Restarting existing qemu2 VM for "newest-cni-469000" ...
	I0729 04:26:27.674191    5586 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:26:27.674236    5586 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/newest-cni-469000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/newest-cni-469000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/newest-cni-469000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:3e:52:bc:64:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/newest-cni-469000/disk.qcow2
	I0729 04:26:27.676295    5586 main.go:141] libmachine: STDOUT: 
	I0729 04:26:27.676315    5586 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:26:27.676345    5586 fix.go:56] duration metric: took 13.437666ms for fixHost
	I0729 04:26:27.676349    5586 start.go:83] releasing machines lock for "newest-cni-469000", held for 13.453208ms
	W0729 04:26:27.676355    5586 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:26:27.676385    5586 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:26:27.676389    5586 start.go:729] Will try again in 5 seconds ...
	I0729 04:26:32.678497    5586 start.go:360] acquireMachinesLock for newest-cni-469000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:26:33.537417    5586 start.go:364] duration metric: took 858.820625ms to acquireMachinesLock for "newest-cni-469000"
	I0729 04:26:33.537636    5586 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:26:33.537657    5586 fix.go:54] fixHost starting: 
	I0729 04:26:33.538368    5586 fix.go:112] recreateIfNeeded on newest-cni-469000: state=Stopped err=<nil>
	W0729 04:26:33.538395    5586 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:26:33.543958    5586 out.go:177] * Restarting existing qemu2 VM for "newest-cni-469000" ...
	I0729 04:26:33.557864    5586 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:26:33.558099    5586 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/newest-cni-469000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/newest-cni-469000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/newest-cni-469000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:3e:52:bc:64:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/newest-cni-469000/disk.qcow2
	I0729 04:26:33.567728    5586 main.go:141] libmachine: STDOUT: 
	I0729 04:26:33.567788    5586 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:26:33.567872    5586 fix.go:56] duration metric: took 30.218291ms for fixHost
	I0729 04:26:33.567894    5586 start.go:83] releasing machines lock for "newest-cni-469000", held for 30.44375ms
	W0729 04:26:33.568066    5586 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-469000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-469000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:26:33.576684    5586 out.go:177] 
	W0729 04:26:33.580875    5586 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:26:33.580894    5586 out.go:239] * 
	* 
	W0729 04:26:33.583202    5586 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:26:33.593824    5586 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-469000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-469000 -n newest-cni-469000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-469000 -n newest-cni-469000: exit status 7 (60.048666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-469000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (6.11s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-789000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-789000 -n default-k8s-diff-port-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-789000 -n default-k8s-diff-port-789000: exit status 7 (32.416875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-789000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-789000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-789000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.037583ms)

** stderr ** 
	error: context "default-k8s-diff-port-789000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-789000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-789000 -n default-k8s-diff-port-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-789000 -n default-k8s-diff-port-789000: exit status 7 (28.01625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-789000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-789000 -n default-k8s-diff-port-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-789000 -n default-k8s-diff-port-789000: exit status 7 (27.65025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
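
Note: the diff above lists every expected v1.30.3 image as missing because "image list" ran against a stopped host, so the "got" side is empty. With a running profile the same check can be reproduced by hand (a sketch; assumes jq is installed and that the JSON output carries a repoTags field, as in this minikube version):

	out/minikube-darwin-arm64 -p default-k8s-diff-port-789000 image list --format=json | jq -r '.[].repoTags[]' | sort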

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-789000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-789000 --alsologtostderr -v=1: exit status 83 (39.815875ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-789000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-789000"

-- /stdout --
** stderr ** 
	I0729 04:26:30.786265    5605 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:26:30.786419    5605 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:30.786422    5605 out.go:304] Setting ErrFile to fd 2...
	I0729 04:26:30.786424    5605 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:30.786548    5605 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:26:30.786755    5605 out.go:298] Setting JSON to false
	I0729 04:26:30.786761    5605 mustload.go:65] Loading cluster: default-k8s-diff-port-789000
	I0729 04:26:30.786957    5605 config.go:182] Loaded profile config "default-k8s-diff-port-789000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:26:30.790932    5605 out.go:177] * The control-plane node default-k8s-diff-port-789000 host is not running: state=Stopped
	I0729 04:26:30.794909    5605 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-789000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-789000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-789000 -n default-k8s-diff-port-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-789000 -n default-k8s-diff-port-789000: exit status 7 (28.016583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-789000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-789000 -n default-k8s-diff-port-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-789000 -n default-k8s-diff-port-789000: exit status 7 (28.852833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
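
Note: pause never reaches the guest here; minikube loads the profile, sees the host in state=Stopped, prints the advice above, and exits with the advisory status 83 rather than attempting to pause. The host state can be checked directly with the same command the post-mortem helper uses:

	out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-789000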

TestStartStop/group/embed-certs/serial/FirstStart (9.99s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-022000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-022000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.928885292s)

-- stdout --
	* [embed-certs-022000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-022000" primary control-plane node in "embed-certs-022000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-022000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:26:31.196988    5629 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:26:31.197120    5629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:31.197123    5629 out.go:304] Setting ErrFile to fd 2...
	I0729 04:26:31.197125    5629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:31.197258    5629 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:26:31.198321    5629 out.go:298] Setting JSON to false
	I0729 04:26:31.214349    5629 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3354,"bootTime":1722249037,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:26:31.214412    5629 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:26:31.218965    5629 out.go:177] * [embed-certs-022000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:26:31.224943    5629 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:26:31.224997    5629 notify.go:220] Checking for updates...
	I0729 04:26:31.231894    5629 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:26:31.234964    5629 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:26:31.237884    5629 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:26:31.240934    5629 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:26:31.243898    5629 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:26:31.247170    5629 config.go:182] Loaded profile config "multinode-369000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:26:31.247239    5629 config.go:182] Loaded profile config "newest-cni-469000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 04:26:31.247290    5629 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:26:31.251901    5629 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:26:31.258859    5629 start.go:297] selected driver: qemu2
	I0729 04:26:31.258865    5629 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:26:31.258871    5629 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:26:31.261040    5629 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:26:31.263838    5629 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:26:31.266983    5629 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:26:31.267016    5629 cni.go:84] Creating CNI manager for ""
	I0729 04:26:31.267025    5629 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:26:31.267031    5629 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:26:31.267064    5629 start.go:340] cluster config:
	{Name:embed-certs-022000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:26:31.270712    5629 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:26:31.277905    5629 out.go:177] * Starting "embed-certs-022000" primary control-plane node in "embed-certs-022000" cluster
	I0729 04:26:31.281871    5629 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:26:31.281884    5629 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:26:31.281893    5629 cache.go:56] Caching tarball of preloaded images
	I0729 04:26:31.281949    5629 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:26:31.281965    5629 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:26:31.282042    5629 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/embed-certs-022000/config.json ...
	I0729 04:26:31.282053    5629 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/embed-certs-022000/config.json: {Name:mk65cc24d468a169044b91239e5ed9099355af17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:26:31.282274    5629 start.go:360] acquireMachinesLock for embed-certs-022000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:26:31.282307    5629 start.go:364] duration metric: took 27.292µs to acquireMachinesLock for "embed-certs-022000"
	I0729 04:26:31.282318    5629 start.go:93] Provisioning new machine with config: &{Name:embed-certs-022000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:26:31.282353    5629 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:26:31.289873    5629 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:26:31.307566    5629 start.go:159] libmachine.API.Create for "embed-certs-022000" (driver="qemu2")
	I0729 04:26:31.307599    5629 client.go:168] LocalClient.Create starting
	I0729 04:26:31.307668    5629 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:26:31.307696    5629 main.go:141] libmachine: Decoding PEM data...
	I0729 04:26:31.307708    5629 main.go:141] libmachine: Parsing certificate...
	I0729 04:26:31.307747    5629 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:26:31.307769    5629 main.go:141] libmachine: Decoding PEM data...
	I0729 04:26:31.307778    5629 main.go:141] libmachine: Parsing certificate...
	I0729 04:26:31.308229    5629 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:26:31.461520    5629 main.go:141] libmachine: Creating SSH key...
	I0729 04:26:31.516201    5629 main.go:141] libmachine: Creating Disk image...
	I0729 04:26:31.516206    5629 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:26:31.516379    5629 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/embed-certs-022000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/embed-certs-022000/disk.qcow2
	I0729 04:26:31.525534    5629 main.go:141] libmachine: STDOUT: 
	I0729 04:26:31.525549    5629 main.go:141] libmachine: STDERR: 
	I0729 04:26:31.525589    5629 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/embed-certs-022000/disk.qcow2 +20000M
	I0729 04:26:31.533339    5629 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:26:31.533349    5629 main.go:141] libmachine: STDERR: 
	I0729 04:26:31.533365    5629 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/embed-certs-022000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/embed-certs-022000/disk.qcow2
	I0729 04:26:31.533367    5629 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:26:31.533379    5629 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:26:31.533413    5629 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/embed-certs-022000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/embed-certs-022000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/embed-certs-022000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:b4:0f:3f:bc:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/embed-certs-022000/disk.qcow2
	I0729 04:26:31.535042    5629 main.go:141] libmachine: STDOUT: 
	I0729 04:26:31.535059    5629 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:26:31.535080    5629 client.go:171] duration metric: took 227.483458ms to LocalClient.Create
	I0729 04:26:33.537185    5629 start.go:128] duration metric: took 2.254888125s to createHost
	I0729 04:26:33.537239    5629 start.go:83] releasing machines lock for "embed-certs-022000", held for 2.254995458s
	W0729 04:26:33.537361    5629 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:26:33.554830    5629 out.go:177] * Deleting "embed-certs-022000" in qemu2 ...
	W0729 04:26:33.608515    5629 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:26:33.608554    5629 start.go:729] Will try again in 5 seconds ...
	I0729 04:26:38.610651    5629 start.go:360] acquireMachinesLock for embed-certs-022000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:26:38.611186    5629 start.go:364] duration metric: took 416.333µs to acquireMachinesLock for "embed-certs-022000"
	I0729 04:26:38.611313    5629 start.go:93] Provisioning new machine with config: &{Name:embed-certs-022000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:26:38.611601    5629 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:26:38.621245    5629 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:26:38.674943    5629 start.go:159] libmachine.API.Create for "embed-certs-022000" (driver="qemu2")
	I0729 04:26:38.674995    5629 client.go:168] LocalClient.Create starting
	I0729 04:26:38.675122    5629 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/ca.pem
	I0729 04:26:38.675187    5629 main.go:141] libmachine: Decoding PEM data...
	I0729 04:26:38.675206    5629 main.go:141] libmachine: Parsing certificate...
	I0729 04:26:38.675264    5629 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19336-945/.minikube/certs/cert.pem
	I0729 04:26:38.675307    5629 main.go:141] libmachine: Decoding PEM data...
	I0729 04:26:38.675320    5629 main.go:141] libmachine: Parsing certificate...
	I0729 04:26:38.675853    5629 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19336-945/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:26:38.841084    5629 main.go:141] libmachine: Creating SSH key...
	I0729 04:26:39.043514    5629 main.go:141] libmachine: Creating Disk image...
	I0729 04:26:39.043521    5629 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:26:39.043716    5629 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19336-945/.minikube/machines/embed-certs-022000/disk.qcow2.raw /Users/jenkins/minikube-integration/19336-945/.minikube/machines/embed-certs-022000/disk.qcow2
	I0729 04:26:39.053566    5629 main.go:141] libmachine: STDOUT: 
	I0729 04:26:39.053595    5629 main.go:141] libmachine: STDERR: 
	I0729 04:26:39.053647    5629 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/embed-certs-022000/disk.qcow2 +20000M
	I0729 04:26:39.061803    5629 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:26:39.061817    5629 main.go:141] libmachine: STDERR: 
	I0729 04:26:39.061829    5629 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19336-945/.minikube/machines/embed-certs-022000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19336-945/.minikube/machines/embed-certs-022000/disk.qcow2
	I0729 04:26:39.061833    5629 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:26:39.061842    5629 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:26:39.061894    5629 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/embed-certs-022000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/embed-certs-022000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/embed-certs-022000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:59:91:ab:c6:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/embed-certs-022000/disk.qcow2
	I0729 04:26:39.063625    5629 main.go:141] libmachine: STDOUT: 
	I0729 04:26:39.063641    5629 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:26:39.063656    5629 client.go:171] duration metric: took 388.667167ms to LocalClient.Create
	I0729 04:26:41.064804    5629 start.go:128] duration metric: took 2.453242125s to createHost
	I0729 04:26:41.064835    5629 start.go:83] releasing machines lock for "embed-certs-022000", held for 2.453707167s
	W0729 04:26:41.065009    5629 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-022000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-022000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:26:41.073403    5629 out.go:177] 
	W0729 04:26:41.077465    5629 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:26:41.077483    5629 out.go:239] * 
	* 
	W0729 04:26:41.079032    5629 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:26:41.090373    5629 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-022000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-022000 -n embed-certs-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-022000 -n embed-certs-022000: exit status 7 (64.003583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.99s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-469000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-469000 -n newest-cni-469000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-469000 -n newest-cni-469000: exit status 7 (28.602042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-469000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-469000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-469000 --alsologtostderr -v=1: exit status 83 (40.107958ms)

-- stdout --
	* The control-plane node newest-cni-469000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-469000"

-- /stdout --
** stderr ** 
	I0729 04:26:33.767590    5646 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:26:33.767729    5646 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:33.767732    5646 out.go:304] Setting ErrFile to fd 2...
	I0729 04:26:33.767735    5646 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:33.767859    5646 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:26:33.768078    5646 out.go:298] Setting JSON to false
	I0729 04:26:33.768085    5646 mustload.go:65] Loading cluster: newest-cni-469000
	I0729 04:26:33.768298    5646 config.go:182] Loaded profile config "newest-cni-469000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 04:26:33.771919    5646 out.go:177] * The control-plane node newest-cni-469000 host is not running: state=Stopped
	I0729 04:26:33.776063    5646 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-469000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-469000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-469000 -n newest-cni-469000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-469000 -n newest-cni-469000: exit status 7 (28.407292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-469000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-469000 -n newest-cni-469000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-469000 -n newest-cni-469000: exit status 7 (29.229292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-469000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-022000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-022000 create -f testdata/busybox.yaml: exit status 1 (31.962292ms)

** stderr **
	error: context "embed-certs-022000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-022000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-022000 -n embed-certs-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-022000 -n embed-certs-022000: exit status 7 (29.083833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-022000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-022000 -n embed-certs-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-022000 -n embed-certs-022000: exit status 7 (28.407583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-022000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-022000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-022000 describe deploy/metrics-server -n kube-system: exit status 1 (26.577792ms)

** stderr **
	error: context "embed-certs-022000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-022000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-022000 -n embed-certs-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-022000 -n embed-certs-022000: exit status 7 (28.86975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-022000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-022000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.18333525s)

-- stdout --
	* [embed-certs-022000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-022000" primary control-plane node in "embed-certs-022000" cluster
	* Restarting existing qemu2 VM for "embed-certs-022000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-022000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:26:44.708733    5709 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:26:44.708870    5709 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:44.708874    5709 out.go:304] Setting ErrFile to fd 2...
	I0729 04:26:44.708876    5709 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:44.709011    5709 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:26:44.710026    5709 out.go:298] Setting JSON to false
	I0729 04:26:44.726355    5709 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3367,"bootTime":1722249037,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 04:26:44.726430    5709 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:26:44.733423    5709 out.go:177] * [embed-certs-022000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:26:44.740308    5709 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 04:26:44.740340    5709 notify.go:220] Checking for updates...
	I0729 04:26:44.745785    5709 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 04:26:44.749235    5709 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:26:44.752342    5709 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:26:44.755318    5709 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 04:26:44.758317    5709 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:26:44.761584    5709 config.go:182] Loaded profile config "embed-certs-022000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:26:44.761845    5709 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:26:44.766254    5709 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:26:44.773268    5709 start.go:297] selected driver: qemu2
	I0729 04:26:44.773275    5709 start.go:901] validating driver "qemu2" against &{Name:embed-certs-022000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:26:44.773336    5709 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:26:44.775682    5709 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:26:44.775704    5709 cni.go:84] Creating CNI manager for ""
	I0729 04:26:44.775711    5709 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:26:44.775738    5709 start.go:340] cluster config:
	{Name:embed-certs-022000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:26:44.779383    5709 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:26:44.787241    5709 out.go:177] * Starting "embed-certs-022000" primary control-plane node in "embed-certs-022000" cluster
	I0729 04:26:44.791264    5709 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:26:44.791280    5709 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:26:44.791293    5709 cache.go:56] Caching tarball of preloaded images
	I0729 04:26:44.791352    5709 preload.go:172] Found /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:26:44.791357    5709 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:26:44.791418    5709 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/embed-certs-022000/config.json ...
	I0729 04:26:44.791828    5709 start.go:360] acquireMachinesLock for embed-certs-022000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:26:44.791856    5709 start.go:364] duration metric: took 21.875µs to acquireMachinesLock for "embed-certs-022000"
	I0729 04:26:44.791865    5709 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:26:44.791872    5709 fix.go:54] fixHost starting: 
	I0729 04:26:44.791985    5709 fix.go:112] recreateIfNeeded on embed-certs-022000: state=Stopped err=<nil>
	W0729 04:26:44.791992    5709 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:26:44.800275    5709 out.go:177] * Restarting existing qemu2 VM for "embed-certs-022000" ...
	I0729 04:26:44.803238    5709 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:26:44.803286    5709 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/embed-certs-022000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/embed-certs-022000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/embed-certs-022000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:59:91:ab:c6:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/embed-certs-022000/disk.qcow2
	I0729 04:26:44.805327    5709 main.go:141] libmachine: STDOUT: 
	I0729 04:26:44.805347    5709 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:26:44.805383    5709 fix.go:56] duration metric: took 13.513042ms for fixHost
	I0729 04:26:44.805392    5709 start.go:83] releasing machines lock for "embed-certs-022000", held for 13.53275ms
	W0729 04:26:44.805399    5709 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:26:44.805427    5709 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:26:44.805432    5709 start.go:729] Will try again in 5 seconds ...
	I0729 04:26:49.807427    5709 start.go:360] acquireMachinesLock for embed-certs-022000: {Name:mkb8a255ae6a5026ee7133df87e20d3057cee91b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:26:49.807894    5709 start.go:364] duration metric: took 355.416µs to acquireMachinesLock for "embed-certs-022000"
	I0729 04:26:49.808014    5709 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:26:49.808035    5709 fix.go:54] fixHost starting: 
	I0729 04:26:49.808764    5709 fix.go:112] recreateIfNeeded on embed-certs-022000: state=Stopped err=<nil>
	W0729 04:26:49.808795    5709 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:26:49.817140    5709 out.go:177] * Restarting existing qemu2 VM for "embed-certs-022000" ...
	I0729 04:26:49.821225    5709 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:26:49.821425    5709 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19336-945/.minikube/machines/embed-certs-022000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/embed-certs-022000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19336-945/.minikube/machines/embed-certs-022000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:59:91:ab:c6:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19336-945/.minikube/machines/embed-certs-022000/disk.qcow2
	I0729 04:26:49.830438    5709 main.go:141] libmachine: STDOUT: 
	I0729 04:26:49.830494    5709 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:26:49.830568    5709 fix.go:56] duration metric: took 22.535875ms for fixHost
	I0729 04:26:49.830610    5709 start.go:83] releasing machines lock for "embed-certs-022000", held for 22.694708ms
	W0729 04:26:49.830800    5709 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-022000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-022000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:26:49.838235    5709 out.go:177] 
	W0729 04:26:49.842261    5709 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:26:49.842311    5709 out.go:239] * 
	* 
	W0729 04:26:49.844579    5709 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:26:49.852141    5709 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-022000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-022000 -n embed-certs-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-022000 -n embed-certs-022000: exit status 7 (67.030958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.25s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-022000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-022000 -n embed-certs-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-022000 -n embed-certs-022000: exit status 7 (31.966917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-022000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-022000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-022000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.050709ms)

** stderr **
	error: context "embed-certs-022000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-022000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-022000 -n embed-certs-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-022000 -n embed-certs-022000: exit status 7 (29.049583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-022000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-022000 -n embed-certs-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-022000 -n embed-certs-022000: exit status 7 (29.147542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-022000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-022000 --alsologtostderr -v=1: exit status 83 (41.354083ms)

-- stdout --
	* The control-plane node embed-certs-022000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-022000"

-- /stdout --
** stderr ** 
	I0729 04:26:50.119301    5730 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:26:50.119451    5730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:50.119454    5730 out.go:304] Setting ErrFile to fd 2...
	I0729 04:26:50.119456    5730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:50.119585    5730 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 04:26:50.119821    5730 out.go:298] Setting JSON to false
	I0729 04:26:50.119828    5730 mustload.go:65] Loading cluster: embed-certs-022000
	I0729 04:26:50.120025    5730 config.go:182] Loaded profile config "embed-certs-022000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:26:50.124041    5730 out.go:177] * The control-plane node embed-certs-022000 host is not running: state=Stopped
	I0729 04:26:50.128069    5730 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-022000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-022000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-022000 -n embed-certs-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-022000 -n embed-certs-022000: exit status 7 (29.003ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-022000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-022000 -n embed-certs-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-022000 -n embed-certs-022000: exit status 7 (28.868292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

Test pass (162/282)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 9.69
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.1
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-beta.0/json-events 9.15
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.11
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.34
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 205.52
38 TestAddons/serial/Volcano 38.95
40 TestAddons/serial/GCPAuth/Namespaces 0.07
42 TestAddons/parallel/Registry 13.77
43 TestAddons/parallel/Ingress 19.66
44 TestAddons/parallel/InspektorGadget 10.22
45 TestAddons/parallel/MetricsServer 5.28
48 TestAddons/parallel/CSI 52.38
49 TestAddons/parallel/Headlamp 16.53
50 TestAddons/parallel/CloudSpanner 5.17
51 TestAddons/parallel/LocalPath 39.73
52 TestAddons/parallel/NvidiaDevicePlugin 5.15
53 TestAddons/parallel/Yakd 11.2
54 TestAddons/StoppedEnableDisable 12.38
62 TestHyperKitDriverInstallOrUpdate 10.12
65 TestErrorSpam/setup 34.32
66 TestErrorSpam/start 0.34
67 TestErrorSpam/status 0.24
68 TestErrorSpam/pause 0.65
69 TestErrorSpam/unpause 0.58
70 TestErrorSpam/stop 55.29
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 51.44
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 37.57
77 TestFunctional/serial/KubeContext 0.03
78 TestFunctional/serial/KubectlGetPods 0.05
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.48
82 TestFunctional/serial/CacheCmd/cache/add_local 1.13
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.03
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
86 TestFunctional/serial/CacheCmd/cache/cache_reload 0.62
87 TestFunctional/serial/CacheCmd/cache/delete 0.07
88 TestFunctional/serial/MinikubeKubectlCmd 0.66
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.92
90 TestFunctional/serial/ExtraConfig 62.58
91 TestFunctional/serial/ComponentHealth 0.04
92 TestFunctional/serial/LogsCmd 0.68
93 TestFunctional/serial/LogsFileCmd 0.7
94 TestFunctional/serial/InvalidService 3.54
96 TestFunctional/parallel/ConfigCmd 0.22
97 TestFunctional/parallel/DashboardCmd 8.29
98 TestFunctional/parallel/DryRun 0.25
99 TestFunctional/parallel/InternationalLanguage 0.12
100 TestFunctional/parallel/StatusCmd 0.25
105 TestFunctional/parallel/AddonsCmd 0.09
106 TestFunctional/parallel/PersistentVolumeClaim 25.82
108 TestFunctional/parallel/SSHCmd 0.12
109 TestFunctional/parallel/CpCmd 0.42
111 TestFunctional/parallel/FileSync 0.06
112 TestFunctional/parallel/CertSync 0.45
116 TestFunctional/parallel/NodeLabels 0.04
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.07
120 TestFunctional/parallel/License 0.22
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.56
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
132 TestFunctional/parallel/ServiceCmd/DeployApp 7.08
133 TestFunctional/parallel/ServiceCmd/List 0.28
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.28
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
136 TestFunctional/parallel/ServiceCmd/Format 0.1
137 TestFunctional/parallel/ServiceCmd/URL 0.1
138 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
139 TestFunctional/parallel/ProfileCmd/profile_list 0.12
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
141 TestFunctional/parallel/MountCmd/any-port 4.96
142 TestFunctional/parallel/MountCmd/specific-port 0.91
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.83
144 TestFunctional/parallel/Version/short 0.04
145 TestFunctional/parallel/Version/components 0.15
146 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
147 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
148 TestFunctional/parallel/ImageCommands/ImageListJson 0.09
149 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
150 TestFunctional/parallel/ImageCommands/ImageBuild 1.61
151 TestFunctional/parallel/ImageCommands/Setup 1.71
152 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.76
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.59
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.16
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.16
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.21
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.23
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.2
159 TestFunctional/parallel/DockerEnv/bash 0.26
160 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
161 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
162 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
163 TestFunctional/delete_echo-server_images 0.03
164 TestFunctional/delete_my-image_image 0.01
165 TestFunctional/delete_minikube_cached_images 0.01
169 TestMultiControlPlane/serial/StartCluster 195.76
170 TestMultiControlPlane/serial/DeployApp 4.41
171 TestMultiControlPlane/serial/PingHostFromPods 0.75
172 TestMultiControlPlane/serial/AddWorkerNode 58.02
173 TestMultiControlPlane/serial/NodeLabels 0.18
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.25
175 TestMultiControlPlane/serial/CopyFile 4.4
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 79.74
187 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.04
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 3.13
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.2
221 TestMainNoArgs 0.03
268 TestStoppedBinaryUpgrade/Setup 0.89
280 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
284 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
285 TestNoKubernetes/serial/ProfileList 31.34
286 TestNoKubernetes/serial/Stop 2.58
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
298 TestStoppedBinaryUpgrade/MinikubeLogs 0.88
303 TestStartStop/group/old-k8s-version/serial/Stop 1.8
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
314 TestStartStop/group/no-preload/serial/Stop 1.91
315 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.11
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 2.07
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.11
330 TestStartStop/group/newest-cni/serial/DeployApp 0
331 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
332 TestStartStop/group/newest-cni/serial/Stop 1.82
333 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/embed-certs/serial/Stop 3.19
348 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-388000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-388000: exit status 85 (92.922125ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-388000 | jenkins | v1.33.1 | 29 Jul 24 03:34 PDT |          |
	|         | -p download-only-388000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 03:34:20
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 03:34:20.971944    1399 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:34:20.972086    1399 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:34:20.972089    1399 out.go:304] Setting ErrFile to fd 2...
	I0729 03:34:20.972092    1399 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:34:20.972227    1399 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	W0729 03:34:20.972311    1399 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19336-945/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19336-945/.minikube/config/config.json: no such file or directory
	I0729 03:34:20.973542    1399 out.go:298] Setting JSON to true
	I0729 03:34:20.990548    1399 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":223,"bootTime":1722249037,"procs":441,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 03:34:20.990682    1399 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:34:20.996606    1399 out.go:97] [download-only-388000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:34:20.996789    1399 notify.go:220] Checking for updates...
	W0729 03:34:20.996812    1399 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 03:34:21.000282    1399 out.go:169] MINIKUBE_LOCATION=19336
	I0729 03:34:21.003533    1399 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 03:34:21.008522    1399 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:34:21.009975    1399 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:34:21.013456    1399 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	W0729 03:34:21.019470    1399 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 03:34:21.019670    1399 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:34:21.024463    1399 out.go:97] Using the qemu2 driver based on user configuration
	I0729 03:34:21.024482    1399 start.go:297] selected driver: qemu2
	I0729 03:34:21.024495    1399 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:34:21.024565    1399 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:34:21.028451    1399 out.go:169] Automatically selected the socket_vmnet network
	I0729 03:34:21.034132    1399 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 03:34:21.034224    1399 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 03:34:21.034295    1399 cni.go:84] Creating CNI manager for ""
	I0729 03:34:21.034313    1399 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 03:34:21.034359    1399 start.go:340] cluster config:
	{Name:download-only-388000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-388000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:34:21.039586    1399 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:34:21.044459    1399 out.go:97] Downloading VM boot image ...
	I0729 03:34:21.044476    1399 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso
	I0729 03:34:26.586055    1399 out.go:97] Starting "download-only-388000" primary control-plane node in "download-only-388000" cluster
	I0729 03:34:26.586075    1399 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 03:34:26.645831    1399 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 03:34:26.645841    1399 cache.go:56] Caching tarball of preloaded images
	I0729 03:34:26.645999    1399 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 03:34:26.650108    1399 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 03:34:26.650115    1399 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 03:34:26.723389    1399 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 03:34:32.935456    1399 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 03:34:32.935620    1399 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 03:34:33.631853    1399 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 03:34:33.632056    1399 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/download-only-388000/config.json ...
	I0729 03:34:33.632075    1399 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/download-only-388000/config.json: {Name:mk0e53c4345a3115807d78af2fad3c40d51e0602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:34:33.632292    1399 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 03:34:33.632495    1399 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0729 03:34:34.168468    1399 out.go:169] 
	W0729 03:34:34.174307    1399 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19336-945/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1085c1a60 0x1085c1a60 0x1085c1a60 0x1085c1a60 0x1085c1a60 0x1085c1a60 0x1085c1a60] Decompressors:map[bz2:0x14000504db0 gz:0x14000504db8 tar:0x14000504d60 tar.bz2:0x14000504d70 tar.gz:0x14000504d80 tar.xz:0x14000504d90 tar.zst:0x14000504da0 tbz2:0x14000504d70 tgz:0x14000504d80 txz:0x14000504d90 tzst:0x14000504da0 xz:0x14000504dc0 zip:0x14000504dd0 zst:0x14000504dc8] Getters:map[file:0x140018825f0 http:0x140006bc230 https:0x140006bc280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0729 03:34:34.174331    1399 out_reason.go:110] 
	W0729 03:34:34.182376    1399 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:34:34.186382    1399 out.go:169] 
	
	
	* The control-plane node download-only-388000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-388000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
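Note: the "Failed to cache kubectl" warning in the log above is a 404 on the checksum URL; as far as we can tell, upstream Kubernetes never published darwin/arm64 kubectl binaries for v1.20.0, which would also explain the TestDownloadOnly/v1.20.0/kubectl failure in the summary. A quick check, assuming outbound network access (URL taken verbatim from the log):

    # Expect an HTTP 404 status line for the v1.20.0 darwin/arm64 checksum file
    curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1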

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-388000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.30.3/json-events (9.69s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-397000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-397000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (9.6905975s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (9.69s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-397000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-397000: exit status 85 (79.115958ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-388000 | jenkins | v1.33.1 | 29 Jul 24 03:34 PDT |                     |
	|         | -p download-only-388000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:34 PDT | 29 Jul 24 03:34 PDT |
	| delete  | -p download-only-388000        | download-only-388000 | jenkins | v1.33.1 | 29 Jul 24 03:34 PDT | 29 Jul 24 03:34 PDT |
	| start   | -o=json --download-only        | download-only-397000 | jenkins | v1.33.1 | 29 Jul 24 03:34 PDT |                     |
	|         | -p download-only-397000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 03:34:34
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 03:34:34.586933    1423 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:34:34.587070    1423 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:34:34.587073    1423 out.go:304] Setting ErrFile to fd 2...
	I0729 03:34:34.587076    1423 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:34:34.587202    1423 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 03:34:34.588228    1423 out.go:298] Setting JSON to true
	I0729 03:34:34.604048    1423 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":237,"bootTime":1722249037,"procs":441,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 03:34:34.604120    1423 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:34:34.608584    1423 out.go:97] [download-only-397000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:34:34.608693    1423 notify.go:220] Checking for updates...
	I0729 03:34:34.612616    1423 out.go:169] MINIKUBE_LOCATION=19336
	I0729 03:34:34.615550    1423 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 03:34:34.619621    1423 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:34:34.622622    1423 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:34:34.625618    1423 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	W0729 03:34:34.631553    1423 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 03:34:34.631691    1423 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:34:34.634583    1423 out.go:97] Using the qemu2 driver based on user configuration
	I0729 03:34:34.634599    1423 start.go:297] selected driver: qemu2
	I0729 03:34:34.634605    1423 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:34:34.634661    1423 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:34:34.637602    1423 out.go:169] Automatically selected the socket_vmnet network
	I0729 03:34:34.642590    1423 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 03:34:34.642668    1423 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 03:34:34.642685    1423 cni.go:84] Creating CNI manager for ""
	I0729 03:34:34.642697    1423 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:34:34.642702    1423 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 03:34:34.642752    1423 start.go:340] cluster config:
	{Name:download-only-397000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-397000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:34:34.645990    1423 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:34:34.649564    1423 out.go:97] Starting "download-only-397000" primary control-plane node in "download-only-397000" cluster
	I0729 03:34:34.649571    1423 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:34:34.697520    1423 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:34:34.697536    1423 cache.go:56] Caching tarball of preloaded images
	I0729 03:34:34.697684    1423 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:34:34.702565    1423 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0729 03:34:34.702572    1423 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0729 03:34:34.781513    1423 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-397000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-397000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)
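Note: the preload download above carries its expected digest in the URL query (checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca), and minikube verifies the digest before reusing the cache (the v1.20.0 run above shows the saving/verifying steps). A sketch for re-checking the cached tarball by hand on macOS, with the path and digest copied from the log:

    # md5 -q prints only the digest; it should match 5a76dba1959f6b6fc5e29e1e172ab9ca
    md5 -q /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4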

TestDownloadOnly/v1.30.3/DeleteAll (0.1s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.10s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-397000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0-beta.0/json-events (9.15s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-706000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-706000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 : (9.145430834s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (9.15s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-706000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-706000: exit status 85 (77.944209ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-388000 | jenkins | v1.33.1 | 29 Jul 24 03:34 PDT |                     |
	|         | -p download-only-388000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:34 PDT | 29 Jul 24 03:34 PDT |
	| delete  | -p download-only-388000             | download-only-388000 | jenkins | v1.33.1 | 29 Jul 24 03:34 PDT | 29 Jul 24 03:34 PDT |
	| start   | -o=json --download-only             | download-only-397000 | jenkins | v1.33.1 | 29 Jul 24 03:34 PDT |                     |
	|         | -p download-only-397000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:34 PDT | 29 Jul 24 03:34 PDT |
	| delete  | -p download-only-397000             | download-only-397000 | jenkins | v1.33.1 | 29 Jul 24 03:34 PDT | 29 Jul 24 03:34 PDT |
	| start   | -o=json --download-only             | download-only-706000 | jenkins | v1.33.1 | 29 Jul 24 03:34 PDT |                     |
	|         | -p download-only-706000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 03:34:44
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 03:34:44.559296    1448 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:34:44.559432    1448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:34:44.559436    1448 out.go:304] Setting ErrFile to fd 2...
	I0729 03:34:44.559438    1448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:34:44.559574    1448 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 03:34:44.560675    1448 out.go:298] Setting JSON to true
	I0729 03:34:44.576483    1448 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":247,"bootTime":1722249037,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 03:34:44.576553    1448 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:34:44.580850    1448 out.go:97] [download-only-706000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:34:44.580962    1448 notify.go:220] Checking for updates...
	I0729 03:34:44.584853    1448 out.go:169] MINIKUBE_LOCATION=19336
	I0729 03:34:44.587840    1448 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 03:34:44.591899    1448 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:34:44.594903    1448 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:34:44.597897    1448 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	W0729 03:34:44.603844    1448 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 03:34:44.604017    1448 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:34:44.605140    1448 out.go:97] Using the qemu2 driver based on user configuration
	I0729 03:34:44.605148    1448 start.go:297] selected driver: qemu2
	I0729 03:34:44.605152    1448 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:34:44.605197    1448 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:34:44.607885    1448 out.go:169] Automatically selected the socket_vmnet network
	I0729 03:34:44.612928    1448 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 03:34:44.613020    1448 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 03:34:44.613039    1448 cni.go:84] Creating CNI manager for ""
	I0729 03:34:44.613047    1448 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:34:44.613053    1448 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 03:34:44.613093    1448 start.go:340] cluster config:
	{Name:download-only-706000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-706000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:34:44.616419    1448 iso.go:125] acquiring lock: {Name:mkc2f8b6b613e612067c34d522bb9afa15f6411b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:34:44.619946    1448 out.go:97] Starting "download-only-706000" primary control-plane node in "download-only-706000" cluster
	I0729 03:34:44.619955    1448 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 03:34:44.675203    1448 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 03:34:44.675219    1448 cache.go:56] Caching tarball of preloaded images
	I0729 03:34:44.675388    1448 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 03:34:44.679593    1448 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0729 03:34:44.679600    1448 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 03:34:44.758090    1448 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:5025ece13368183bde5a7f01207f4bc3 -> /Users/jenkins/minikube-integration/19336-945/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-706000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-706000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-706000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.34s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-681000 --alsologtostderr --binary-mirror http://127.0.0.1:49323 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-681000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-681000
--- PASS: TestBinaryMirror (0.34s)
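Note: --binary-mirror redirects minikube's kubectl/kubelet/kubeadm downloads to an alternate base URL, here a loopback HTTP endpoint on port 49323. A rough stand-in for that setup, assuming any static server that mirrors the dl.k8s.io release layout (the python3 server below is purely illustrative, not what the test harness runs):

    # Hypothetical local mirror on the port shown in the log
    python3 -m http.server 49323 --bind 127.0.0.1 &
    out/minikube-darwin-arm64 start --download-only -p binary-mirror-681000 \
      --binary-mirror http://127.0.0.1:49323 --driver=qemu2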

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-867000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-867000: exit status 85 (59.321834ms)

-- stdout --
	* Profile "addons-867000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-867000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-867000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-867000: exit status 85 (55.356292ms)

-- stdout --
	* Profile "addons-867000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-867000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (205.52s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-867000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-867000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m25.516026959s)
--- PASS: TestAddons/Setup (205.52s)

TestAddons/serial/Volcano (38.95s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 8.89175ms
addons_test.go:897: volcano-scheduler stabilized in 8.924125ms
addons_test.go:905: volcano-admission stabilized in 8.975167ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-v2w9m" [58e6ba11-614b-4d13-9ac7-b515b999467f] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003833125s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-j2v6w" [7e27c535-094a-4c0b-a18f-f6f8dc5ebca4] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003567291s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-2tvql" [ab7ae7fd-d4fa-4435-a07c-c824e6b57034] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.002626667s
addons_test.go:932: (dbg) Run:  kubectl --context addons-867000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-867000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-867000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [5e2a6f4b-2dae-4e1e-a120-e1c242ea164e] Pending
helpers_test.go:344: "test-job-nginx-0" [5e2a6f4b-2dae-4e1e-a120-e1c242ea164e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [5e2a6f4b-2dae-4e1e-a120-e1c242ea164e] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.00400025s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-867000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-867000 addons disable volcano --alsologtostderr -v=1: (9.718709541s)
--- PASS: TestAddons/serial/Volcano (38.95s)
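Note: the Volcano checks above poll pods by label selector until they report Running and healthy. An equivalent wait done by hand, assuming the kubeconfig context the test created (selectors, namespaces, and timeouts copied from the output):

    kubectl --context addons-867000 -n volcano-system wait pod \
      -l app=volcano-scheduler --for=condition=Ready --timeout=6m
    kubectl --context addons-867000 -n my-volcano wait pod \
      -l volcano.sh/job-name=test-job --for=condition=Ready --timeout=3m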

TestAddons/serial/GCPAuth/Namespaces (0.07s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-867000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-867000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

TestAddons/parallel/Registry (13.77s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.24875ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-64swm" [79c74171-af9f-4bf5-9525-885acab035e7] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004151125s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ptnb6" [db481822-2ef8-4411-8201-28e4f6975698] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003896167s
addons_test.go:342: (dbg) Run:  kubectl --context addons-867000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-867000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-867000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.467787958s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-867000 ip
2024/07/29 03:39:29 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-867000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.77s)
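
Note: the busybox probe above uses wget --spider, which requests headers only, so a zero exit proves the registry Service name resolves over cluster DNS and answers HTTP without pulling any data. A sketch of the same probe driven from Go (assumes kubectl on PATH; the pod name registry-test mirrors the test):

-- sketch (Go) --
// One-shot in-cluster reachability probe via a throwaway busybox pod.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "addons-867000",
		"run", "--rm", "registry-test", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "-it", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("registry unreachable:", err)
		return
	}
	fmt.Println("registry reachable")
}
-- /sketch --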

TestAddons/parallel/Ingress (19.66s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-867000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-867000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-867000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [bf3681a7-4245-4848-a3de-ede2052ffc23] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [bf3681a7-4245-4848-a3de-ede2052ffc23] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004101833s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-867000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-867000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-867000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-867000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-867000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-867000 addons disable ingress --alsologtostderr -v=1: (7.201155417s)
--- PASS: TestAddons/parallel/Ingress (19.66s)
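
Note: the curl above runs inside the VM against 127.0.0.1 with a forced Host header, so routing is decided purely by the Ingress rule rather than by real DNS for nginx.example.com. A sketch of that check (assumes the minikube binary path and profile used above):

-- sketch (Go) --
// Exercise the ingress controller with a spoofed Host header over minikube ssh.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "addons-867000",
		"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("curl failed:", err)
	}
	fmt.Print(string(out))
}
-- /sketch --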

TestAddons/parallel/InspektorGadget (10.22s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-24pmm" [db03242f-6124-4ad6-afdf-3050431ba936] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003845041s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-867000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-867000: (5.21778125s)
--- PASS: TestAddons/parallel/InspektorGadget (10.22s)

TestAddons/parallel/MetricsServer (5.28s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.378167ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-rvks5" [85782bc4-7ac1-4980-94fb-8de3e007e399] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0044565s
addons_test.go:417: (dbg) Run:  kubectl --context addons-867000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-867000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.28s)

TestAddons/parallel/CSI (52.38s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 2.505084ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-867000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-867000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ddbe683b-ddd1-437a-93ef-b0ec00ce66f8] Pending
helpers_test.go:344: "task-pv-pod" [ddbe683b-ddd1-437a-93ef-b0ec00ce66f8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ddbe683b-ddd1-437a-93ef-b0ec00ce66f8] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003755375s
addons_test.go:590: (dbg) Run:  kubectl --context addons-867000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-867000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-867000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-867000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-867000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-867000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-867000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b2dd321b-e682-4bec-b33d-49dcdc24ccea] Pending
helpers_test.go:344: "task-pv-pod-restore" [b2dd321b-e682-4bec-b33d-49dcdc24ccea] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b2dd321b-e682-4bec-b33d-49dcdc24ccea] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003819375s
addons_test.go:632: (dbg) Run:  kubectl --context addons-867000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-867000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-867000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-867000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-867000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.08652475s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-867000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (52.38s)
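
Note: each helpers_test.go:394 line above is one iteration of a poll that reads only the PVC phase via jsonpath; the claim moves from Pending to Bound once the csi-hostpath provisioner creates the volume. A sketch of the same wait (assumes kubectl on PATH; the interval and timeout here are illustrative, not the harness's exact values):

-- sketch (Go) --
// Poll a PVC's .status.phase until it reports Bound.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func pvcPhase(context, name, ns string) string {
	out, _ := exec.Command("kubectl", "--context", context, "get", "pvc", name,
		"-o", "jsonpath={.status.phase}", "-n", ns).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if pvcPhase("addons-867000", "hpvc", "default") == "Bound" {
			fmt.Println("pvc bound")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pvc")
}
-- /sketch --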

TestAddons/parallel/Headlamp (16.53s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-867000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-r5cvd" [2fe5131b-0c21-492e-89ba-c6de32e5b932] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-r5cvd" [2fe5131b-0c21-492e-89ba-c6de32e5b932] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003916125s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-867000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-867000 addons disable headlamp --alsologtostderr -v=1: (5.190942625s)
--- PASS: TestAddons/parallel/Headlamp (16.53s)

TestAddons/parallel/CloudSpanner (5.17s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-w2xtm" [0eaf5f30-7c0a-4dee-9800-052d2da7c368] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004255458s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-867000
--- PASS: TestAddons/parallel/CloudSpanner (5.17s)

TestAddons/parallel/LocalPath (39.73s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-867000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-867000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ff956f70-065e-42ff-a189-fef9d05e6c6a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ff956f70-065e-42ff-a189-fef9d05e6c6a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ff956f70-065e-42ff-a189-fef9d05e6c6a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003729375s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-867000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-867000 ssh "cat /opt/local-path-provisioner/pvc-f04aea57-5de3-4895-852d-a421e1bfb95c_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-867000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-867000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-867000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-867000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.281389209s)
--- PASS: TestAddons/parallel/LocalPath (39.73s)
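
Note: the ssh cat above works because local-path-provisioner backs each volume with a host directory named pv_namespace_pvc (visible in the path logged above). A sketch that resolves the PVC to its PV name first instead of hard-coding the UUID (assumes kubectl and the minikube binary used above):

-- sketch (Go) --
// Resolve a PVC to its backing PV, then read the file back over minikube ssh.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "addons-867000",
		"get", "pvc", "test-pvc", "-n", "default",
		"-o", "jsonpath={.spec.volumeName}").Output()
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	pv := strings.TrimSpace(string(out))
	// local-path-provisioner lays volumes out as <pv>_<namespace>_<pvc>.
	path := fmt.Sprintf("/opt/local-path-provisioner/%s_default_test-pvc/file1", pv)
	data, err := exec.Command("out/minikube-darwin-arm64", "-p", "addons-867000",
		"ssh", "cat "+path).CombinedOutput()
	if err != nil {
		fmt.Println("readback failed:", err)
		return
	}
	fmt.Print(string(data))
}
-- /sketch --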

TestAddons/parallel/NvidiaDevicePlugin (5.15s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4rwtm" [fa3ed83f-f003-4827-ae42-ff3290c778e0] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004440083s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-867000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.15s)

TestAddons/parallel/Yakd (11.2s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-vvcnd" [538f60a1-d3a0-42f2-80ac-b55cfc73c075] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00385775s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-867000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-867000 addons disable yakd --alsologtostderr -v=1: (5.198183541s)
--- PASS: TestAddons/parallel/Yakd (11.20s)

TestAddons/StoppedEnableDisable (12.38s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-867000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-867000: (12.196978208s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-867000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-867000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-867000
--- PASS: TestAddons/StoppedEnableDisable (12.38s)

TestHyperKitDriverInstallOrUpdate (10.12s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.12s)

TestErrorSpam/setup (34.32s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-638000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-638000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-638000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-638000 --driver=qemu2 : (34.315147459s)
--- PASS: TestErrorSpam/setup (34.32s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-638000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-638000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-638000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-638000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-638000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-638000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-638000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-638000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-638000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-638000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-638000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-638000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.65s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-638000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-638000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-638000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-638000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-638000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-638000 pause
--- PASS: TestErrorSpam/pause (0.65s)

TestErrorSpam/unpause (0.58s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-638000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-638000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-638000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-638000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-638000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-638000 unpause
--- PASS: TestErrorSpam/unpause (0.58s)

TestErrorSpam/stop (55.29s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-638000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-638000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-638000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-638000 stop: (3.197141042s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-638000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-638000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-638000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-638000 stop: (26.061643s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-638000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-638000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-638000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-638000 stop: (26.028492s)
--- PASS: TestErrorSpam/stop (55.29s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19336-945/.minikube/files/etc/test/nested/copy/1397/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (51.44s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-727000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0729 03:43:20.168803    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/addons-867000/client.crt: no such file or directory
E0729 03:43:20.175582    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/addons-867000/client.crt: no such file or directory
E0729 03:43:20.187620    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/addons-867000/client.crt: no such file or directory
E0729 03:43:20.209672    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/addons-867000/client.crt: no such file or directory
E0729 03:43:20.251718    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/addons-867000/client.crt: no such file or directory
E0729 03:43:20.333823    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/addons-867000/client.crt: no such file or directory
E0729 03:43:20.495984    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/addons-867000/client.crt: no such file or directory
E0729 03:43:20.818079    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/addons-867000/client.crt: no such file or directory
E0729 03:43:21.460177    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/addons-867000/client.crt: no such file or directory
E0729 03:43:22.742328    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/addons-867000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-727000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (51.442654417s)
--- PASS: TestFunctional/serial/StartWithProxy (51.44s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.57s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-727000 --alsologtostderr -v=8
E0729 03:43:25.304566    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/addons-867000/client.crt: no such file or directory
E0729 03:43:30.426782    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/addons-867000/client.crt: no such file or directory
E0729 03:43:40.669131    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/addons-867000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-727000 --alsologtostderr -v=8: (37.566112416s)
functional_test.go:659: soft start took 37.566472417s for "functional-727000" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.57s)
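
Note: the "soft start took …" figure is simply wall-clock time around a second start against the existing profile, which reuses the running VM instead of re-provisioning it. A sketch of that measurement (assumes the binary and profile used above):

-- sketch (Go) --
// Time a second `minikube start` against an existing profile.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("out/minikube-darwin-arm64", "start",
		"-p", "functional-727000", "--alsologtostderr", "-v=8")
	if err := cmd.Run(); err != nil {
		fmt.Println("start failed:", err)
		return
	}
	fmt.Printf("soft start took %s\n", time.Since(start))
}
-- /sketch --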

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-727000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.48s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 cache add registry.k8s.io/pause:3.1
E0729 03:44:01.151199    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/addons-867000/client.crt: no such file or directory
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.48s)

TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-727000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local811479599/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 cache add minikube-local-cache-test:functional-727000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 cache delete minikube-local-cache-test:functional-727000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-727000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.62s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-727000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (67.608709ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.62s)
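
Note: the sequence above is the contract under test: remove the image on the node, confirm crictl no longer sees it (the expected non-zero exit), run cache reload, confirm it is back. A sketch of the same round trip (assumes the minikube binary and profile used above; mk is a local helper defined here):

-- sketch (Go) --
// Delete a cached image inside the node, then restore it with `cache reload`.
package main

import (
	"fmt"
	"os/exec"
)

func mk(args ...string) error {
	return exec.Command("out/minikube-darwin-arm64",
		append([]string{"-p", "functional-727000"}, args...)...).Run()
}

func main() {
	_ = mk("ssh", "sudo docker rmi registry.k8s.io/pause:latest")
	if err := mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		fmt.Println("image unexpectedly still present")
		return
	}
	if err := mk("cache", "reload"); err != nil {
		fmt.Println("reload failed:", err)
		return
	}
	if err := mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after reload:", err)
		return
	}
	fmt.Println("cache reload restored the image")
}
-- /sketch --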

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.66s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 kubectl -- --context functional-727000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.66s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.92s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-727000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.92s)

TestFunctional/serial/ExtraConfig (62.58s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-727000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0729 03:44:42.112968    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/addons-867000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-727000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m2.579724958s)
functional_test.go:757: restart took 1m2.579834833s for "functional-727000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (62.58s)
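
Note: --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision is forwarded to the kube-apiserver command line. One way to confirm the flag landed — a sketch assuming kubectl on PATH and kubeadm's standard component=kube-apiserver pod label:

-- sketch (Go) --
// Check the apiserver static pod's command line for the extra-config flag.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-727000",
		"-n", "kube-system", "get", "pods", "-l", "component=kube-apiserver",
		"-o", "jsonpath={.items[0].spec.containers[0].command}").Output()
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	if strings.Contains(string(out), "NamespaceAutoProvision") {
		fmt.Println("admission plugin flag is present")
	} else {
		fmt.Println("flag not found in apiserver command line")
	}
}
-- /sketch --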

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-727000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.68s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.68s)

TestFunctional/serial/LogsFileCmd (0.7s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd4076694268/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.70s)

TestFunctional/serial/InvalidService (3.54s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-727000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-727000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-727000: exit status 115 (103.825667ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31322 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-727000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.54s)
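
Note: per the stderr above, exit status 115 is the reason code minikube maps to SVC_UNREACHABLE, so the assertion is on the exit code rather than on stderr text. In Go the code is recovered from *exec.ExitError — a sketch (assumes the binary and profile used above):

-- sketch (Go) --
// Assert on a CLI's exit status instead of parsing its stderr.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("out/minikube-darwin-arm64",
		"service", "invalid-svc", "-p", "functional-727000").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("exit status:", ee.ExitCode()) // expect 115 for a pod-less service
		return
	}
	fmt.Println("command succeeded or failed to start:", err)
}
-- /sketch --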

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-727000 config get cpus: exit status 14 (31.13325ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-727000 config get cpus: exit status 14 (29.692625ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
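
Note: per the stderr above, `config get` on an unset key exits with code 14 ("specified key could not be found in config"), which is why the unset/get pairs above are expected to fail. A sketch of the full round trip (assumes the binary and profile used above; cfg is a local helper defined here):

-- sketch (Go) --
// Round-trip minikube config: set, get, unset, then expect exit code 14.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func cfg(args ...string) error {
	return exec.Command("out/minikube-darwin-arm64",
		append([]string{"-p", "functional-727000", "config"}, args...)...).Run()
}

func main() {
	_ = cfg("set", "cpus", "2")
	if err := cfg("get", "cpus"); err != nil {
		fmt.Println("get after set failed:", err)
		return
	}
	_ = cfg("unset", "cpus")
	var ee *exec.ExitError
	if err := cfg("get", "cpus"); errors.As(err, &ee) {
		fmt.Println("exit status after unset:", ee.ExitCode()) // expect 14
	}
}
-- /sketch --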

TestFunctional/parallel/DashboardCmd (8.29s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-727000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-727000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2166: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.29s)

TestFunctional/parallel/DryRun (0.25s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-727000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-727000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (133.797917ms)

-- stdout --
	* [functional-727000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0729 03:45:56.831246    2137 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:45:56.831427    2137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:56.831430    2137 out.go:304] Setting ErrFile to fd 2...
	I0729 03:45:56.831432    2137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:56.831599    2137 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 03:45:56.832733    2137 out.go:298] Setting JSON to false
	I0729 03:45:56.849757    2137 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":919,"bootTime":1722249037,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 03:45:56.849814    2137 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:45:56.854633    2137 out.go:177] * [functional-727000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:45:56.861552    2137 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 03:45:56.861623    2137 notify.go:220] Checking for updates...
	I0729 03:45:56.868480    2137 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 03:45:56.871531    2137 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:45:56.877491    2137 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:45:56.887411    2137 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 03:45:56.897574    2137 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:45:56.900936    2137 config.go:182] Loaded profile config "functional-727000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:45:56.901188    2137 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:45:56.905531    2137 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 03:45:56.911543    2137 start.go:297] selected driver: qemu2
	I0729 03:45:56.911552    2137 start.go:901] validating driver "qemu2" against &{Name:functional-727000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:45:56.911606    2137 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:45:56.918572    2137 out.go:177] 
	W0729 03:45:56.921492    2137 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0729 03:45:56.925553    2137 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-727000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.25s)

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-727000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-727000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (117.576875ms)

-- stdout --
	* [functional-727000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0729 03:45:56.706296    2131 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:45:56.706394    2131 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:56.706397    2131 out.go:304] Setting ErrFile to fd 2...
	I0729 03:45:56.706399    2131 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:56.706536    2131 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
	I0729 03:45:56.708062    2131 out.go:298] Setting JSON to false
	I0729 03:45:56.726968    2131 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":919,"bootTime":1722249037,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0729 03:45:56.727059    2131 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:45:56.731542    2131 out.go:177] * [functional-727000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0729 03:45:56.739529    2131 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 03:45:56.739652    2131 notify.go:220] Checking for updates...
	I0729 03:45:56.747585    2131 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	I0729 03:45:56.751532    2131 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:45:56.758475    2131 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:45:56.761584    2131 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	I0729 03:45:56.764584    2131 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:45:56.767822    2131 config.go:182] Loaded profile config "functional-727000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:45:56.768068    2131 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:45:56.772559    2131 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0729 03:45:56.779521    2131 start.go:297] selected driver: qemu2
	I0729 03:45:56.779530    2131 start.go:901] validating driver "qemu2" against &{Name:functional-727000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-727000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:45:56.779582    2131 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:45:56.785589    2131 out.go:177] 
	W0729 03:45:56.789385    2131 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0729 03:45:56.792566    2131 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
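
Note: the French lines above are the localized form of the same RSRC_INSUFFICIENT_REQ_MEMORY failure shown in English under DryRun. A sketch of triggering it by hand, on the assumption that minikube selects its message catalog from the process locale (LC_ALL/LANG):

# Force a French locale for a single dry-run invocation.
LC_ALL=fr_FR.UTF-8 out/minikube-darwin-arm64 start -p functional-727000 --dry-run --memory 250MB --driver=qemu2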

TestFunctional/parallel/StatusCmd (0.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.25s)
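
Note: the -f flag takes a Go template over the status struct; in the command above, host:, kublet: (the misspelling is literal label text in the test invocation, not a field name), apiserver:, and kubeconfig: are plain text, while {{.Host}}, {{.Kubelet}}, {{.APIServer}}, and {{.Kubeconfig}} are the template fields. A short sketch; the second line assumes jq is installed:

# Render one field via a template, or filter the JSON form.
out/minikube-darwin-arm64 -p functional-727000 status -f '{{.APIServer}}'
out/minikube-darwin-arm64 -p functional-727000 status -o json | jq -r .Host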

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/PersistentVolumeClaim (25.82s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0aae2c8c-d0d5-4353-9079-12f26ea44af1] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004344792s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-727000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-727000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-727000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-727000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0a40cf75-8ea7-49e5-a50c-8ef94f756a7f] Pending
helpers_test.go:344: "sp-pod" [0a40cf75-8ea7-49e5-a50c-8ef94f756a7f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0a40cf75-8ea7-49e5-a50c-8ef94f756a7f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003698625s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-727000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-727000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-727000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9059a09e-eff0-4bac-9aa1-976e1063ebdf] Pending
helpers_test.go:344: "sp-pod" [9059a09e-eff0-4bac-9aa1-976e1063ebdf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9059a09e-eff0-4bac-9aa1-976e1063ebdf] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003786958s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-727000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.82s)
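
Note: the manifests applied above live under testdata/ and are not reproduced in this log. A hypothetical, minimal equivalent of the claim step, using the myclaim name the test queries (illustrative only, not the actual testdata/storage-provisioner/pvc.yaml):

# Apply a minimal claim and confirm it binds via the default storageclass.
kubectl --context functional-727000 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF
kubectl --context functional-727000 get pvc myclaim -o json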

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.42s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh -n functional-727000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 cp functional-727000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd794684033/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh -n functional-727000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh -n functional-727000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.42s)

TestFunctional/parallel/FileSync (0.06s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1397/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh "sudo cat /etc/test/nested/copy/1397/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.06s)

TestFunctional/parallel/CertSync (0.45s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1397.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh "sudo cat /etc/ssl/certs/1397.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1397.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh "sudo cat /usr/share/ca-certificates/1397.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/13972.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh "sudo cat /etc/ssl/certs/13972.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/13972.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh "sudo cat /usr/share/ca-certificates/13972.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.45s)
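
Note: the numeric filenames checked above (/etc/ssl/certs/51391683.0 and /etc/ssl/certs/3ec20f2e.0) follow the OpenSSL subject-hash naming convention for CA certificates. The hash of any PEM certificate can be computed locally to confirm the pairing; cert.pem below is a placeholder:

# Prints the 8-hex-digit subject hash used as the ".0" filename.
openssl x509 -noout -subject_hash -in cert.pem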

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-727000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-727000 ssh "sudo systemctl is-active crio": exit status 1 (65.29ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)
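
Note: this is the expected shape of a pass here: systemctl is-active prints the unit state and exits non-zero (status 3 for an inactive unit), and minikube ssh propagates the failure, so crio being inactive under the docker runtime is exactly what the test asserts. The same probe by hand:

# A non-zero exit means the crio unit is not active on this runtime.
out/minikube-darwin-arm64 -p functional-727000 ssh "sudo systemctl is-active crio" || echo "crio is not the active runtime"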

TestFunctional/parallel/License (0.22s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.22s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.56s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-727000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-727000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-727000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1974: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-727000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.56s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-727000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-727000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [8bd3fb48-0128-45d6-88cb-1fe8c4f07bdd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [8bd3fb48-0128-45d6-88cb-1fe8c4f07bdd] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003498666s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-727000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.0.13 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-727000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
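
Note: the serial tunnel flow above, condensed into one sketch; the nginx-svc service and the 10.101.0.13 ingress IP are taken from this run, and backgrounding plus kill stands in for the harness's daemon/stop steps:

out/minikube-darwin-arm64 -p functional-727000 tunnel --alsologtostderr &
kubectl --context functional-727000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
curl -sI http://10.101.0.13   # AccessDirect equivalent
kill $!                       # DeleteTunnel equivalent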

TestFunctional/parallel/ServiceCmd/DeployApp (7.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-727000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-727000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-sdbq4" [ed52e523-f795-4a0e-aeba-50edd874fcd2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-sdbq4" [ed52e523-f795-4a0e-aeba-50edd874fcd2] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.0036645s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.08s)
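
Note: once the deployment above is healthy, the remaining ServiceCmd subtests resolve the same NodePort endpoint in several forms; the equivalent commands, as exercised below:

out/minikube-darwin-arm64 -p functional-727000 service list -o json
out/minikube-darwin-arm64 -p functional-727000 service --namespace=default --https --url hello-node
out/minikube-darwin-arm64 -p functional-727000 service hello-node --url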

TestFunctional/parallel/ServiceCmd/List (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.28s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 service list -o json
functional_test.go:1490: Took "276.876792ms" to run "out/minikube-darwin-arm64 -p functional-727000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.105.4:31192
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.105.4:31192
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "83.512709ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "33.571042ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "82.5925ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "32.507292ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

TestFunctional/parallel/MountCmd/any-port (4.96s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-727000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1791735349/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722249948977670000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1791735349/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722249948977670000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1791735349/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722249948977670000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1791735349/001/test-1722249948977670000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-727000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (60.245959ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 29 10:45 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 29 10:45 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 29 10:45 test-1722249948977670000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh cat /mount-9p/test-1722249948977670000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-727000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [27684f43-5092-4b8b-a84f-4a3bdb2a8df4] Pending
helpers_test.go:344: "busybox-mount" [27684f43-5092-4b8b-a84f-4a3bdb2a8df4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [27684f43-5092-4b8b-a84f-4a3bdb2a8df4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [27684f43-5092-4b8b-a84f-4a3bdb2a8df4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003921375s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-727000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-727000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1791735349/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (4.96s)
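
Note: the mount lifecycle the test drives, as a standalone sketch; $HOSTDIR is a placeholder for the temporary host directory the harness creates, and the first findmnt may fail once while the 9p mount is still coming up, as seen above:

out/minikube-darwin-arm64 mount -p functional-727000 "$HOSTDIR:/mount-9p" &
out/minikube-darwin-arm64 -p functional-727000 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-darwin-arm64 -p functional-727000 ssh "sudo umount -f /mount-9p"
kill $!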

TestFunctional/parallel/MountCmd/specific-port (0.91s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-727000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1983177758/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-727000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (60.089375ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-727000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1983177758/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-727000 ssh "sudo umount -f /mount-9p": exit status 1 (59.9325ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-727000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-727000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1983177758/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.91s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-727000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1296640856/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-727000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1296640856/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-727000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1296640856/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-727000 ssh "findmnt -T" /mount1: exit status 1 (67.119291ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-727000 ssh "findmnt -T" /mount2: exit status 1 (54.131709ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-727000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-727000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1296640856/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-727000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1296640856/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-727000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1296640856/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.15s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.15s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-727000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-727000
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-727000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-727000 image ls --format short --alsologtostderr:
I0729 03:46:03.385228    2288 out.go:291] Setting OutFile to fd 1 ...
I0729 03:46:03.385366    2288 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:46:03.385370    2288 out.go:304] Setting ErrFile to fd 2...
I0729 03:46:03.385373    2288 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:46:03.385501    2288 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
I0729 03:46:03.385949    2288 config.go:182] Loaded profile config "functional-727000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:46:03.386016    2288 config.go:182] Loaded profile config "functional-727000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:46:03.386884    2288 ssh_runner.go:195] Run: systemctl --version
I0729 03:46:03.386892    2288 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/functional-727000/id_rsa Username:docker}
I0729 03:46:03.412055    2288 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-727000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-727000 | 2b499d30630e8 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 61773190d42ff | 112MB  |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 8e97cdb19e7cc | 107MB  |
| registry.k8s.io/kube-scheduler              | v1.30.3           | d48f992a22722 | 60.5MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 2351f570ed0ea | 87.9MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| docker.io/kicbase/echo-server               | functional-727000 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/nginx                     | latest            | 43b17fe33c4b4 | 193MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/library/nginx                     | alpine            | d7cd33d7d4ed1 | 44.8MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-727000 image ls --format table --alsologtostderr:
I0729 03:46:03.628540    2294 out.go:291] Setting OutFile to fd 1 ...
I0729 03:46:03.628684    2294 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:46:03.628687    2294 out.go:304] Setting ErrFile to fd 2...
I0729 03:46:03.628690    2294 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:46:03.628833    2294 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
I0729 03:46:03.629258    2294 config.go:182] Loaded profile config "functional-727000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:46:03.629317    2294 config.go:182] Loaded profile config "functional-727000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:46:03.630141    2294 ssh_runner.go:195] Run: systemctl --version
I0729 03:46:03.630150    2294 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/functional-727000/id_rsa Username:docker}
I0729 03:46:03.657111    2294 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-727000 image ls --format json --alsologtostderr:
[{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-727000"],"size":"4780000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"112000000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"44800000"},{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"60500000"},{"id":"43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"87900000"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"2b499d30630e89f44e8d2efc90ffe39a223d54912baf5e72eebe134169470ea2","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-727000"],"size":"30"},{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"107000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-727000 image ls --format json --alsologtostderr:
I0729 03:46:03.539920    2292 out.go:291] Setting OutFile to fd 1 ...
I0729 03:46:03.540088    2292 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:46:03.540092    2292 out.go:304] Setting ErrFile to fd 2...
I0729 03:46:03.540095    2292 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:46:03.540233    2292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
I0729 03:46:03.540654    2292 config.go:182] Loaded profile config "functional-727000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:46:03.540751    2292 config.go:182] Loaded profile config "functional-727000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:46:03.541697    2292 ssh_runner.go:195] Run: systemctl --version
I0729 03:46:03.541709    2292 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/functional-727000/id_rsa Username:docker}
I0729 03:46:03.570777    2292 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.09s)
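
Note: the JSON form is the easiest of the four image ls formats to post-process. A sketch, assuming jq is installed; the id, repoTags, and size fields match the payload above:

# Tab-separated "tag size" pairs for every cached image.
out/minikube-darwin-arm64 -p functional-727000 image ls --format json | jq -r '.[] | "\(.repoTags[0])\t\(.size)"'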

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-727000 image ls --format yaml --alsologtostderr:
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "107000000"
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "87900000"
- id: 43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "112000000"
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "60500000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 2b499d30630e89f44e8d2efc90ffe39a223d54912baf5e72eebe134169470ea2
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-727000
size: "30"
- id: d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "44800000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-727000
size: "4780000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-727000 image ls --format yaml --alsologtostderr:
I0729 03:46:03.452834    2290 out.go:291] Setting OutFile to fd 1 ...
I0729 03:46:03.452993    2290 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:46:03.452996    2290 out.go:304] Setting ErrFile to fd 2...
I0729 03:46:03.452998    2290 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:46:03.453129    2290 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
I0729 03:46:03.453531    2290 config.go:182] Loaded profile config "functional-727000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:46:03.453595    2290 config.go:182] Loaded profile config "functional-727000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:46:03.454362    2290 ssh_runner.go:195] Run: systemctl --version
I0729 03:46:03.454370    2290 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/functional-727000/id_rsa Username:docker}
I0729 03:46:03.479178    2290 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)
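
The YAML listing carries the same records as the JSON one. Decoding it is analogous; this sketch assumes the gopkg.in/yaml.v3 package is available (an assumption — the test itself only shells out and inspects the text):

    package main

    import (
        "fmt"
        "os/exec"

        "gopkg.in/yaml.v3" // assumed dependency
    )

    type image struct {
        ID       string   `yaml:"id"`
        RepoTags []string `yaml:"repoTags"`
        Size     string   `yaml:"size"`
    }

    func main() {
        out, err := exec.Command("minikube", "-p", "functional-727000",
            "image", "ls", "--format", "yaml").Output()
        if err != nil {
            panic(err)
        }
        var images []image
        if err := yaml.Unmarshal(out, &images); err != nil {
            panic(err)
        }
        fmt.Printf("%d images\n", len(images))
    }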

TestFunctional/parallel/ImageCommands/ImageBuild (1.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-727000 ssh pgrep buildkitd: exit status 1 (57.798209ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 image build -t localhost/my-image:functional-727000 testdata/build --alsologtostderr
E0729 03:46:04.034233    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/addons-867000/client.crt: no such file or directory
2024/07/29 03:46:04 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-727000 image build -t localhost/my-image:functional-727000 testdata/build --alsologtostderr: (1.4777855s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-727000 image build -t localhost/my-image:functional-727000 testdata/build --alsologtostderr:
I0729 03:46:03.758983    2298 out.go:291] Setting OutFile to fd 1 ...
I0729 03:46:03.759201    2298 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:46:03.759204    2298 out.go:304] Setting ErrFile to fd 2...
I0729 03:46:03.759206    2298 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:46:03.759327    2298 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19336-945/.minikube/bin
I0729 03:46:03.759830    2298 config.go:182] Loaded profile config "functional-727000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:46:03.760541    2298 config.go:182] Loaded profile config "functional-727000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:46:03.761393    2298 ssh_runner.go:195] Run: systemctl --version
I0729 03:46:03.761401    2298 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19336-945/.minikube/machines/functional-727000/id_rsa Username:docker}
I0729 03:46:03.785509    2298 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.4144281331.tar
I0729 03:46:03.785571    2298 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0729 03:46:03.789469    2298 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4144281331.tar
I0729 03:46:03.791162    2298 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4144281331.tar: stat -c "%s %y" /var/lib/minikube/build/build.4144281331.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4144281331.tar': No such file or directory
I0729 03:46:03.791177    2298 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.4144281331.tar --> /var/lib/minikube/build/build.4144281331.tar (3072 bytes)
I0729 03:46:03.800475    2298 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4144281331
I0729 03:46:03.806599    2298 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4144281331 -xf /var/lib/minikube/build/build.4144281331.tar
I0729 03:46:03.810970    2298 docker.go:360] Building image: /var/lib/minikube/build/build.4144281331
I0729 03:46:03.811030    2298 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-727000 /var/lib/minikube/build/build.4144281331
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.2s
#6 [2/3] RUN true
#6 DONE 0.1s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:e707555833d4f2c5180ee6449063c4b93056ccfbc96bf200e6c38c1cccef3029 done
#8 naming to localhost/my-image:functional-727000 done
#8 DONE 0.0s
I0729 03:46:05.184850    2298 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-727000 /var/lib/minikube/build/build.4144281331: (1.373819541s)
I0729 03:46:05.184905    2298 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4144281331
I0729 03:46:05.188607    2298 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4144281331.tar
I0729 03:46:05.191763    2298 build_images.go:217] Built localhost/my-image:functional-727000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.4144281331.tar
I0729 03:46:05.191774    2298 build_images.go:133] succeeded building to: functional-727000
I0729 03:46:05.191777    2298 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.61s)
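
The build stages (#1–#8) imply a three-step Dockerfile: FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /. A sketch of packing such a context into the tar that minikube ships to the VM (stdlib only; the file contents here are assumed for illustration):

    package main

    import (
        "archive/tar"
        "os"
    )

    func main() {
        // Build context matching the stages in the log: a Dockerfile plus
        // the content.txt it ADDs.
        files := map[string]string{
            "Dockerfile":  "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n",
            "content.txt": "hello\n",
        }
        f, err := os.Create("build.tar")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        tw := tar.NewWriter(f)
        defer tw.Close()
        for name, body := range files {
            // Header sized to the body, then the body bytes.
            hdr := &tar.Header{Name: name, Mode: 0o644, Size: int64(len(body))}
            if err := tw.WriteHeader(hdr); err != nil {
                panic(err)
            }
            if _, err := tw.Write([]byte(body)); err != nil {
                panic(err)
            }
        }
        // minikube then copies the tar into the VM, untars it under
        // /var/lib/minikube/build/, and runs `docker build -t localhost/my-image:...`.
    }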

TestFunctional/parallel/ImageCommands/Setup (1.71s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.690203s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-727000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.71s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 image load --daemon docker.io/kicbase/echo-server:functional-727000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.76s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 image load --daemon docker.io/kicbase/echo-server:functional-727000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.59s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-727000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 image load --daemon docker.io/kicbase/echo-server:functional-727000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 image save docker.io/kicbase/echo-server:functional-727000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.16s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 image rm docker.io/kicbase/echo-server:functional-727000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.21s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-727000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 image save --daemon docker.io/kicbase/echo-server:functional-727000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-727000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.20s)
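
Taken together, the last four tests exercise a save/remove/load round trip. The same sequence driven from Go (paths and profile are examples; `run` is a local helper, not a minikube API):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run is a local helper mirroring the (dbg) Run steps above.
    func run(args ...string) {
        out, err := exec.Command("minikube", args...).CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("minikube %v: %v\n%s", args, err, out))
        }
    }

    func main() {
        img := "docker.io/kicbase/echo-server:functional-727000"
        tarball := "/tmp/echo-server-save.tar"
        run("-p", "functional-727000", "image", "save", img, tarball) // save to a tar file
        run("-p", "functional-727000", "image", "rm", img)            // drop it from the runtime
        run("-p", "functional-727000", "image", "load", tarball)      // load it back from the tar
        run("-p", "functional-727000", "image", "ls")                 // verify it is listed again
    }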

TestFunctional/parallel/DockerEnv/bash (0.26s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-727000 docker-env) && out/minikube-darwin-arm64 status -p functional-727000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-727000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.26s)
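
`docker-env` prints `export KEY="VALUE"` lines (DOCKER_HOST and related variables) that re-point the local docker CLI at the daemon inside the VM; the test evals them in bash. A Go sketch applying the same exports to a `docker images` call (naive parsing, for illustration only):

    package main

    import (
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Capture the `export KEY="VALUE"` lines.
        out, err := exec.Command("minikube", "-p", "functional-727000",
            "docker-env", "--shell", "bash").Output()
        if err != nil {
            panic(err)
        }
        env := os.Environ()
        for _, line := range strings.Split(string(out), "\n") {
            if !strings.HasPrefix(line, "export ") {
                continue // skip comments such as the trailing eval hint
            }
            kv := strings.TrimPrefix(line, "export ")
            env = append(env, strings.ReplaceAll(kv, `"`, "")) // naive unquoting
        }
        // With those variables set, docker talks to the daemon inside the VM.
        docker := exec.Command("docker", "images")
        docker.Env = env
        docker.Stdout = os.Stdout
        if err := docker.Run(); err != nil {
            panic(err)
        }
    }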

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-727000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)
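
All three subtests run `update-context`, which rewrites the profile's kubeconfig entry so its server address matches the cluster's current IP. A sketch that runs it and then prints the resulting server field (the jsonpath query is an example):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Re-point the kubeconfig entry at the cluster's current address.
        if out, err := exec.Command("minikube", "-p", "functional-727000",
            "update-context").CombinedOutput(); err != nil {
            panic(fmt.Sprintf("%v\n%s", err, out))
        }
        // Show the server the context now points at.
        out, err := exec.Command("kubectl", "config", "view", "-o",
            `jsonpath={.clusters[?(@.name=="functional-727000")].cluster.server}`).Output()
        if err != nil {
            panic(err)
        }
        fmt.Printf("server: %s\n", out)
    }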

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-727000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-727000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-727000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (195.76s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-714000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0729 03:48:20.134801    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/addons-867000/client.crt: no such file or directory
E0729 03:48:47.843191    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/addons-867000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-714000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m15.568809083s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (195.76s)

TestMultiControlPlane/serial/DeployApp (4.41s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-714000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-714000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-714000 -- rollout status deployment/busybox: (2.874615333s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-714000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-714000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-714000 -- exec busybox-fc5497c4f-j8vbr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-714000 -- exec busybox-fc5497c4f-jspdp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-714000 -- exec busybox-fc5497c4f-lg8b6 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-714000 -- exec busybox-fc5497c4f-j8vbr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-714000 -- exec busybox-fc5497c4f-jspdp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-714000 -- exec busybox-fc5497c4f-lg8b6 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-714000 -- exec busybox-fc5497c4f-j8vbr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-714000 -- exec busybox-fc5497c4f-jspdp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-714000 -- exec busybox-fc5497c4f-lg8b6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.41s)
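
The deploy test boils down to: apply the busybox manifest, wait for rollout, then verify in-cluster DNS from every pod. The exec loop, sketched with plain kubectl (pod discovery matches the jsonpath used above; names are from this run):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Pod names, fetched the same way the test does.
        out, err := exec.Command("kubectl", "--context", "ha-714000", "get", "pods",
            "-o", "jsonpath={.items[*].metadata.name}").Output()
        if err != nil {
            panic(err)
        }
        names := []string{"kubernetes.io", "kubernetes.default",
            "kubernetes.default.svc.cluster.local"}
        for _, pod := range strings.Fields(string(out)) {
            for _, host := range names {
                // Resolve each name from inside the pod, as the test does.
                if err := exec.Command("kubectl", "--context", "ha-714000",
                    "exec", pod, "--", "nslookup", host).Run(); err != nil {
                    panic(fmt.Sprintf("%s cannot resolve %s: %v", pod, host, err))
                }
            }
        }
    }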

TestMultiControlPlane/serial/PingHostFromPods (0.75s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-714000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-714000 -- exec busybox-fc5497c4f-j8vbr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-714000 -- exec busybox-fc5497c4f-j8vbr -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-714000 -- exec busybox-fc5497c4f-jspdp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-714000 -- exec busybox-fc5497c4f-jspdp -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-714000 -- exec busybox-fc5497c4f-lg8b6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-714000 -- exec busybox-fc5497c4f-lg8b6 -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.75s)
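
The shell pipeline above (`awk 'NR==5' | cut -d' ' -f3`) takes line 5 of busybox's nslookup output and keeps its third space-separated field, i.e. the resolved host IP that is then pinged. The same parse in Go (the sample output is illustrative):

    package main

    import (
        "fmt"
        "strings"
    )

    // hostIP mimics `awk 'NR==5' | cut -d' ' -f3`: take line 5, keep field 3.
    func hostIP(nslookupOut string) (string, error) {
        lines := strings.Split(nslookupOut, "\n")
        if len(lines) < 5 {
            return "", fmt.Errorf("want at least 5 lines, got %d", len(lines))
        }
        fields := strings.Split(lines[4], " ") // cut splits on single spaces
        if len(fields) < 3 {
            return "", fmt.Errorf("unexpected line 5: %q", lines[4])
        }
        return fields[2], nil
    }

    func main() {
        // Illustrative busybox nslookup output; only the shape matters here.
        sample := "Server:    10.96.0.10\n" +
            "Address 1: 10.96.0.10\n" +
            "\n" +
            "Name:      host.minikube.internal\n" +
            "Address 1: 192.168.105.1 host.minikube.internal\n"
        ip, err := hostIP(sample)
        if err != nil {
            panic(err)
        }
        fmt.Println(ip) // 192.168.105.1, the address the test then pings
    }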

TestMultiControlPlane/serial/AddWorkerNode (58.02s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-714000 -v=7 --alsologtostderr
E0729 03:50:14.831023    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/client.crt: no such file or directory
E0729 03:50:14.837470    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/client.crt: no such file or directory
E0729 03:50:14.849542    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/client.crt: no such file or directory
E0729 03:50:14.871614    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/client.crt: no such file or directory
E0729 03:50:14.913712    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/client.crt: no such file or directory
E0729 03:50:14.994356    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/client.crt: no such file or directory
E0729 03:50:15.156455    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/client.crt: no such file or directory
E0729 03:50:15.478633    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/client.crt: no such file or directory
E0729 03:50:16.120728    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/client.crt: no such file or directory
E0729 03:50:17.402849    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/client.crt: no such file or directory
E0729 03:50:19.964909    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-714000 -v=7 --alsologtostderr: (57.78068325s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.02s)

TestMultiControlPlane/serial/NodeLabels (0.18s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-714000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.18s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.25s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.25s)

TestMultiControlPlane/serial/CopyFile (4.4s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 cp testdata/cp-test.txt ha-714000:/home/docker/cp-test.txt
E0729 03:50:25.087054    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 cp ha-714000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile7924515/001/cp-test_ha-714000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 cp ha-714000:/home/docker/cp-test.txt ha-714000-m02:/home/docker/cp-test_ha-714000_ha-714000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000-m02 "sudo cat /home/docker/cp-test_ha-714000_ha-714000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 cp ha-714000:/home/docker/cp-test.txt ha-714000-m03:/home/docker/cp-test_ha-714000_ha-714000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000-m03 "sudo cat /home/docker/cp-test_ha-714000_ha-714000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 cp ha-714000:/home/docker/cp-test.txt ha-714000-m04:/home/docker/cp-test_ha-714000_ha-714000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000-m04 "sudo cat /home/docker/cp-test_ha-714000_ha-714000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 cp testdata/cp-test.txt ha-714000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 cp ha-714000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile7924515/001/cp-test_ha-714000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 cp ha-714000-m02:/home/docker/cp-test.txt ha-714000:/home/docker/cp-test_ha-714000-m02_ha-714000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000 "sudo cat /home/docker/cp-test_ha-714000-m02_ha-714000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 cp ha-714000-m02:/home/docker/cp-test.txt ha-714000-m03:/home/docker/cp-test_ha-714000-m02_ha-714000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000-m03 "sudo cat /home/docker/cp-test_ha-714000-m02_ha-714000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 cp ha-714000-m02:/home/docker/cp-test.txt ha-714000-m04:/home/docker/cp-test_ha-714000-m02_ha-714000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000-m04 "sudo cat /home/docker/cp-test_ha-714000-m02_ha-714000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 cp testdata/cp-test.txt ha-714000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 cp ha-714000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile7924515/001/cp-test_ha-714000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 cp ha-714000-m03:/home/docker/cp-test.txt ha-714000:/home/docker/cp-test_ha-714000-m03_ha-714000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000 "sudo cat /home/docker/cp-test_ha-714000-m03_ha-714000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 cp ha-714000-m03:/home/docker/cp-test.txt ha-714000-m02:/home/docker/cp-test_ha-714000-m03_ha-714000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000-m02 "sudo cat /home/docker/cp-test_ha-714000-m03_ha-714000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 cp ha-714000-m03:/home/docker/cp-test.txt ha-714000-m04:/home/docker/cp-test_ha-714000-m03_ha-714000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000-m04 "sudo cat /home/docker/cp-test_ha-714000-m03_ha-714000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 cp testdata/cp-test.txt ha-714000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 cp ha-714000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile7924515/001/cp-test_ha-714000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 cp ha-714000-m04:/home/docker/cp-test.txt ha-714000:/home/docker/cp-test_ha-714000-m04_ha-714000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000 "sudo cat /home/docker/cp-test_ha-714000-m04_ha-714000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 cp ha-714000-m04:/home/docker/cp-test.txt ha-714000-m02:/home/docker/cp-test_ha-714000-m04_ha-714000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000-m02 "sudo cat /home/docker/cp-test_ha-714000-m04_ha-714000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 cp ha-714000-m04:/home/docker/cp-test.txt ha-714000-m03:/home/docker/cp-test_ha-714000-m04_ha-714000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-714000 ssh -n ha-714000-m03 "sudo cat /home/docker/cp-test_ha-714000-m04_ha-714000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.40s)
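
The copy matrix above is mechanical: seed each node with testdata/cp-test.txt, then copy from it to every other node and `ssh ... sudo cat` the result. A small generator for that command matrix (it only prints the commands, mirroring the shapes in the log):

    package main

    import "fmt"

    func main() {
        nodes := []string{"ha-714000", "ha-714000-m02", "ha-714000-m03", "ha-714000-m04"}
        for _, src := range nodes {
            // Seed the source node, then fan out to every other node.
            fmt.Printf("minikube -p ha-714000 cp testdata/cp-test.txt %s:/home/docker/cp-test.txt\n", src)
            for _, dst := range nodes {
                if dst == src {
                    continue
                }
                fmt.Printf("minikube -p ha-714000 cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n",
                    src, dst, src, dst)
                fmt.Printf("minikube -p ha-714000 ssh -n %s %q\n",
                    dst, fmt.Sprintf("sudo cat /home/docker/cp-test_%s_%s.txt", src, dst))
            }
        }
    }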

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (79.74s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0729 03:59:43.193213    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/addons-867000/client.crt: no such file or directory
E0729 04:00:14.820102    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/client.crt: no such file or directory
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m19.740783791s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (79.74s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.04s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.04s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.13s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-510000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-510000 --output=json --user=testUser: (3.126067666s)
--- PASS: TestJSONOutput/stop/Command (3.13s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-282000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-282000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (94.127708ms)
-- stdout --
	{"specversion":"1.0","id":"e44860ac-7c74-430d-bbcf-db2becce44af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-282000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9da2f246-453a-4722-93a8-2344bbd188c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19336"}}
	{"specversion":"1.0","id":"461eb0d9-a80c-4d43-8d9a-899bc8419425","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig"}}
	{"specversion":"1.0","id":"7a0fe12c-4550-4c56-ad82-9688121d49cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"5b682fef-46bb-4666-a25c-646f009e5903","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"48d7446e-3ef2-4670-af37-609652d93829","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube"}}
	{"specversion":"1.0","id":"09c1bc86-685f-4a67-85f0-d81d1e2a0491","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"49c9bc31-b9f3-4e87-a57c-50654a690db6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-282000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-282000
--- PASS: TestErrorJSONOutput (0.20s)
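
Each stdout line in --output=json mode is a CloudEvents envelope; the final event in the run above has type io.k8s.sigs.minikube.error with exitcode 56 and name DRV_UNSUPPORTED_OS in its data. A sketch that scans such a stream for error events (feed it the JSON lines on stdin):

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
        "strings"
    )

    // event mirrors the CloudEvents envelope fields visible in the stdout above.
    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var ev event
            if err := json.Unmarshal([]byte(sc.Text()), &ev); err != nil {
                continue // tolerate non-JSON noise in the stream
            }
            if strings.HasSuffix(ev.Type, ".error") {
                // e.g. exitcode=56, name=DRV_UNSUPPORTED_OS in the run above.
                fmt.Printf("error %s (exit %s): %s\n",
                    ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
            }
        }
    }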

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (0.89s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.89s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-834000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-834000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (92.844083ms)
-- stdout --
	* [NoKubernetes-834000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19336
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19336-945/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19336-945/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
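
The MK_USAGE failure shows the flag validation under test: --kubernetes-version and --no-kubernetes are mutually exclusive, rejected with exit status 14 before any VM work starts. An illustrative stdlib-flag version of that check (minikube's real CLI uses different flag machinery; this only demonstrates the rule):

    package main

    import (
        "flag"
        "fmt"
        "os"
    )

    func main() {
        noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
        k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
        flag.Parse()

        // Mutually exclusive: pinning a version makes no sense with no Kubernetes.
        if *noK8s && *k8sVersion != "" {
            fmt.Fprintln(os.Stderr,
                "cannot specify --kubernetes-version with --no-kubernetes")
            os.Exit(14) // the MK_USAGE exit status seen above
        }
        fmt.Println("flags OK")
    }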

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-834000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-834000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.611333ms)
-- stdout --
	* The control-plane node NoKubernetes-834000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-834000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.34s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
E0729 04:23:17.794102    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/functional-727000/client.crt: no such file or directory
E0729 04:23:20.025076    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19336-945/.minikube/profiles/addons-867000/client.crt: no such file or directory
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.578759667s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.75902675s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.34s)

TestNoKubernetes/serial/Stop (2.58s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-834000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-834000: (2.576806541s)
--- PASS: TestNoKubernetes/serial/Stop (2.58s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-834000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-834000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (38.138583ms)
-- stdout --
	* The control-plane node NoKubernetes-834000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-834000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-338000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

TestStartStop/group/old-k8s-version/serial/Stop (1.8s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-993000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-993000 --alsologtostderr -v=3: (1.802971125s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.80s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-993000 -n old-k8s-version-993000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-993000 -n old-k8s-version-993000: exit status 7 (31.27025ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-993000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
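Note: every EnableAddonAfterStop subtest in this report follows the same two-step pattern. A minimal sketch, assuming a freshly stopped profile named old-k8s-version-993000:

    # 1. Confirm the host is down: "status" exits 7 and prints "Stopped",
    #    which the test explicitly tolerates ("may be ok").
    out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-993000
    # 2. Enable an addon while stopped; the setting is recorded in the profile
    #    config and takes effect on the next start.
    out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-993000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4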

TestStartStop/group/no-preload/serial/Stop (1.91s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-561000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-561000 --alsologtostderr -v=3: (1.912395167s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.91s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-561000 -n no-preload-561000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-561000 -n no-preload-561000: exit status 7 (44.724625ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-561000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (2.07s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-789000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-789000 --alsologtostderr -v=3: (2.068057708s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (2.07s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.11s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-789000 -n default-k8s-diff-port-789000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-789000 -n default-k8s-diff-port-789000: exit status 7 (54.280833ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-789000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.11s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-469000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
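Note: --images and --registries are minikube's per-addon image override flags, taking Name=value pairs keyed by the addon's image name; the test points MetricsServer at a stand-in image and an unreachable registry so nothing real is pulled. The invocation, repeated standalone for reference:

    out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-469000 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain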

TestStartStop/group/newest-cni/serial/Stop (1.82s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-469000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-469000 --alsologtostderr -v=3: (1.824408958s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.82s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-469000 -n newest-cni-469000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-469000 -n newest-cni-469000: exit status 7 (59.993209ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-469000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/embed-certs/serial/Stop (3.19s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-022000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-022000 --alsologtostderr -v=3: (3.189742458s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.19s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-022000 -n embed-certs-022000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-022000 -n embed-certs-022000: exit status 7 (59.5445ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-022000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

Test skip (23/282)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.24s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-418000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-418000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-418000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-418000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-418000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-418000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-418000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-418000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-418000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-418000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-418000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: /etc/hosts:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: /etc/resolv.conf:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-418000

>>> host: crictl pods:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: crictl containers:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> k8s: describe netcat deployment:
error: context "cilium-418000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-418000" does not exist

>>> k8s: netcat logs:
error: context "cilium-418000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-418000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-418000" does not exist

>>> k8s: coredns logs:
error: context "cilium-418000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-418000" does not exist

>>> k8s: api server logs:
error: context "cilium-418000" does not exist

>>> host: /etc/cni:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: ip a s:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: ip r s:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: iptables-save:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: iptables table nat:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-418000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-418000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-418000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-418000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-418000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-418000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-418000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-418000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-418000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-418000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-418000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: kubelet daemon config:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> k8s: kubelet logs:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-418000

>>> host: docker daemon status:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: docker daemon config:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: docker system info:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: cri-docker daemon status:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: cri-docker daemon config:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: cri-dockerd version:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: containerd daemon status:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: containerd daemon config:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: containerd config dump:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: crio daemon status:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: crio daemon config:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: /etc/crio:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"

>>> host: crio config:
* Profile "cilium-418000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418000"
----------------------- debugLogs end: cilium-418000 [took: 2.140264833s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-418000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-418000
--- SKIP: TestNetworkPlugins/group/cilium (2.24s)
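Note: all of the debugLogs probes above fail with "context was not found" or "Profile ... not found" because the cilium profile is cleaned up without ever having been started; the diagnostics are collected unconditionally before the skip. To confirm no such context exists, assuming kubectl is on the PATH:

    kubectl config get-contexts    # cilium-418000 is not listed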

TestStartStop/group/disable-driver-mounts (0.1s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-250000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-250000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.10s)
